NPU和CPU對(duì)比運(yùn)行速度有何不同?基于i.MX 8M Plus處理器的MYD-JX8MPQ開發(fā)板

MYIR Electronics (米爾電子) · 2022-05-09 16:46

References

https://www.toradex.cn/blog/nxp-imx8ji-yueiq-kuang-jia-ce-shi-machine-learning

IMX-MACHINE-LEARNING-UG.pdf


CPU and NPU Image Classification

cd /usr/bin/tensorflow-lite-2.4.0/examples

CPU運(yùn)行

./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt

INFO: Loaded model mobilenet_v1_1.0_224_quant.tflite

INFO: resolved reporter

INFO: invoked

INFO: average time: 50.66 ms

INFO: 0.780392: 653 military uniform

INFO: 0.105882: 907 Windsor tie

INFO: 0.0156863: 458 bow tie

INFO: 0.0117647: 466 bulletproof vest

INFO: 0.00784314: 835 suit
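Each result line above follows the pattern `INFO: <score>: <class_id> <label>`. If you want to post-process the output, for example to compare CPU and NPU runs automatically, the lines can be parsed with a short script. This is a sketch written against the format shown above, not part of the eIQ tooling:

```python
import re

# Each result line has the form "INFO: <score>: <class_id> <label>"
LINE_RE = re.compile(r"INFO:\s*([\d.]+):\s*(\d+)\s+(.+)")

def parse_results(log_lines):
    """Extract (score, class_id, label) tuples from label_image output."""
    results = []
    for line in log_lines:
        m = LINE_RE.match(line)
        if m:
            results.append((float(m.group(1)), int(m.group(2)), m.group(3)))
    return results

log = [
    "INFO: 0.780392: 653 military uniform",
    "INFO: 0.105882: 907 Windsor tie",
    "INFO: 0.0156863: 458 bow tie",
]
top = parse_results(log)
print(top[0])  # (0.780392, 653, 'military uniform')
```

Non-result lines ("resolved reporter", "invoked", timing lines) simply fail to match and are skipped.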


GPU/NPU加速運(yùn)行

./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt -a 1

INFO: Loaded model mobilenet_v1_1.0_224_quant.tflite

INFO: resolved reporter

INFO: Created TensorFlow Lite delegate for NNAPI.

INFO: Applied NNAPI delegate.

INFO: invoked

INFO: average time:2.775ms

INFO: 0.768627: 653 military uniform

INFO: 0.105882: 907 Windsor tie

INFO: 0.0196078: 458 bow tie

INFO: 0.0117647: 466 bulletproof vest

INFO: 0.00784314: 835 suit

Alternatively, run with the VX external delegate (USE_GPU_INFERENCE=0 selects the NPU rather than the GPU):

USE_GPU_INFERENCE=0 ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt --external_delegate_path=/usr/lib/libvx_delegate.so

Python運(yùn)行

python3 label_image.py

INFO: Created TensorFlow Lite delegate for NNAPI.

Applied NNAPI delegate.

Warm-up time: 6628.5 ms

Inference time: 2.9 ms

0.870588: military uniform

0.031373: Windsor tie

0.011765: mortarboard

0.007843: bow tie

0.007843: bulletproof vest
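Note how the warm-up run above is over two thousand times slower than steady-state inference: the first NNAPI invocation includes one-time graph compilation for the NPU. When benchmarking your own models, always separate the first call from the averaged loop. A generic sketch of that pattern (this is illustrative stdlib code, not the label_image.py source):

```python
import time

def time_inference(fn, iters=50):
    """Time a callable: returns (first_call_ms, steady_state_avg_ms)."""
    t0 = time.perf_counter()
    fn()  # warm-up: on NPU backends this includes one-time graph compilation
    first_ms = (time.perf_counter() - t0) * 1000

    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    avg_ms = (time.perf_counter() - t0) * 1000 / iters
    return first_ms, avg_ms

# Stand-in workload; replace with interpreter.invoke() on the board
first, avg = time_inference(lambda: sum(x * x for x in range(10_000)))
print(f"first: {first:.3f} ms, steady-state avg: {avg:.3f} ms")
```

Reporting only the average, as label_image does, would hide the multi-second start-up cost that matters for short-lived processes.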


基準(zhǔn)測(cè)試CPU單核運(yùn)行

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite

STARTING!

Log parameter values verbosely: [0]

Graph: [mobilenet_v1_1.0_224_quant.tflite]

Loaded model mobilenet_v1_1.0_224_quant.tflite

The input model file size (MB): 4.27635

Initialized session in 15.076ms.

Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.

count=4 first=166743 curr=161124 min=161054 max=166743 avg=162728 std=2347

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.

count=50 first=161039 curr=161030 min=160877 max=161292 avg=161039 std=94

Inference timings in us: Init: 15076, First inference: 166743, Warmup (avg): 162728, Inference (avg): 161039

Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.

Peak memory footprint (MB): init=2.65234 overall=9.00391

CPU多核運(yùn)行

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --num_threads=4

With four cores, performance is best when --num_threads is set to 4.

STARTING!

Log parameter values verbosely: [0]

Num threads: [4]

Graph: [mobilenet_v1_1.0_224_quant.tflite]

#threads used for CPU inference: [4]

Loaded model mobilenet_v1_1.0_224_quant.tflite

The input model file size (MB): 4.27635

Initialized session in 2.536ms.

Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.

count=11 first=48722 curr=44756 min=44597 max=49397 avg=45518.9 std=1679

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.

count=50 first=44678 curr=44591 min=44590 max=50798 avg=44965.2 std=1170

Inference timings in us: Init: 2536, First inference: 48722, Warmup (avg): 45518.9, Inference (avg): 44965.2

Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.

Peak memory footprint (MB): init=1.38281 overall=8.69922
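The `count=… avg=…` statistics lines share one key=value format across all benchmark_model runs, so they can be parsed for side-by-side comparison. A regex sketch based on the output shown here:

```python
import re

# Matches key=value pairs such as
# "count=50 first=44678 curr=44591 min=44590 max=50798 avg=44965.2 std=1170"
FIELD_RE = re.compile(r"(\w+)=([\d.]+)")

def parse_stats(line):
    """Return benchmark_model statistics as a dict of floats (times in µs)."""
    return {k: float(v) for k, v in FIELD_RE.findall(line)}

s = parse_stats("count=50 first=44678 curr=44591 min=44590 max=50798 avg=44965.2 std=1170")
print(s["avg"], s["count"])  # 44965.2 50.0
```

The same function handles the shorter warm-up lines (e.g. `count=1 curr=6611085`), since missing keys simply do not appear in the dict.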

GPU/NPU加速

./benchmark_model --graph=mobilenet_v1_1.0_224_quant.tflite --num_threads=4 --use_nnapi=true

STARTING!

Log parameter values verbosely: [0]

Num threads: [4]

Graph: [mobilenet_v1_1.0_224_quant.tflite]

#threads used for CPU inference: [4]

Use NNAPI: [1]

NNAPI accelerators available: [vsi-npu]

Loaded model mobilenet_v1_1.0_224_quant.tflite

INFO: Created TensorFlow Lite delegate for NNAPI.

Explicitly applied NNAPI delegate, and the model graph will be completely executed by the delegate.

The input model file size (MB): 4.27635

Initialized session in 3.968ms.

Running benchmark for at least 1 iterations and at least 0.5 seconds but terminate if exceeding 150 seconds.

count=1 curr=6611085

Running benchmark for at least 50 iterations and at least 1 seconds but terminate if exceeding 150 seconds.

count=369 first=2715 curr=2623 min=2572 max=2776 avg=2634.2 std=20

Inference timings in us: Init: 3968, First inference: 6611085, Warmup (avg): 6.61108e+06, Inference (avg): 2634.2

Note: as the benchmark tool itself affects memory footprint, the following is only APPROXIMATE to the actual memory footprint of the model at runtime. Take the information at your discretion.

Peak memory footprint (MB): init=2.42188 overall=28.4062

結(jié)果對(duì)比

CPU運(yùn)行CPU多核多線程NPU加速
圖像分類50.66 ms2.775 ms
基準(zhǔn)測(cè)試161039uS44965.2uS2634.2uS

OpenCV DNN


Download the Models

cd /usr/share/opencv4/testdata/dnn/

python3 download_models_basic.py

Image Classification

cd /usr/share/OpenCV/samples/bin

./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet



Enter the following in the file browser's address bar to download the archive:

ftp://ftp.toradex.cn/Linux/i.MX8/eIQ/OpenCV/Image_Classification.zip

Extract it to obtain models.yml and squeezenet_v1.1.caffemodel.

Copy the files into the board's /usr/share/OpenCV/samples/bin directory:

$ cp /usr/share/opencv4/testdata/dnn/dog416.png /usr/share/OpenCV/samples/bin/
$ cp /usr/share/opencv4/testdata/dnn/squeezenet_v1.1.prototxt /usr/share/OpenCV/samples/bin/
$ cp /usr/share/OpenCV/samples/data/dnn/classification_classes_ILSVRC2012.txt /usr/share/OpenCV/samples/bin/
$ cd /usr/share/OpenCV/samples/bin/

Image Input

./example_dnn_classification --input=dog416.png --zoo=models.yml squeezenet

報(bào)錯(cuò)

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --input=dog416.png --zoo=model.yml squeezenet

ERRORS:

Missing parameter: 'mean'

Missing parameter: 'rgb'

加入?yún)?shù)--rgb 和 --mean=1

還是報(bào)錯(cuò)加入?yún)?shù)--mode

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --rgb --mean=1 --input=dog416.png --zoo=models.yml squeezenet

[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream

[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1

root@myd-jx8mp:/usr/share/OpenCV/samples/bin# ./example_dnn_classification --rgb --mean=1 --input=dog416.png --zoo=models.yml squeezenet --mode

[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream

[ WARN:0] global /usr/src/debug/opencv/4.4.0.imx-r0/git/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=0, duration=-1

Video Input

./example_dnn_classification --device=2 --zoo=models.yml squeezenet

Troubleshooting

If the files are missing from the testdata directory, locate them in the Yocto build tree:

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto$ find . -name "dog416.png"

./build-xwayland/tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/extra/testdata/dnn/dog416.png

再將相應(yīng)的文件復(fù)制到開發(fā)板

cd ./build-xwayland/tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/extra/testdata/

tar -cvf /mnt/e/dnn.tar ./dnn/

If the /usr/share/opencv4/testdata directory does not exist, create it first, then cd into it.

rz導(dǎo)入dnn.tar

Extract it: tar -xvf dnn.tar

terminate called after throwing an instance of 'cv::Exception'

what(): OpenCV(4.4.0) /usr/src/debug/opencv/4.4.0.imx-r0/git/samples/dnn/classification.cpp: error: (Assertion failed) !model.empty() in function 'main'

Aborted
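This assertion fires when cv::dnn cannot load the model, typically because squeezenet_v1.1.caffemodel or a companion file is missing from the working directory. A small pre-flight check (file names taken from the steps above) avoids the abort:

```python
from pathlib import Path

# Files the squeezenet classification sample expects alongside the binary
REQUIRED = [
    "models.yml",
    "squeezenet_v1.1.caffemodel",
    "squeezenet_v1.1.prototxt",
    "classification_classes_ILSVRC2012.txt",
]

def missing_files(directory):
    """Return which of the required files are absent from `directory`."""
    d = Path(directory)
    return [name for name in REQUIRED if not (d / name).exists()]

print(missing_files("/usr/share/OpenCV/samples/bin"))
```

An empty list means all the files are in place; otherwise it names exactly what still needs to be copied over.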

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$ find . -name classification.cpp

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$ cp ./tmp/work/cortexa53-crypto-mx8mp-poky-linux/opencv/4.4.0.imx-r0/packages-split/opencv-src/usr/src/debug/opencv/4.4.0.imx-r0/git/samples/dnn/classification.cpp /mnt/e

lhj@DESKTOP-BINN7F8:~/myd-jx8mp-yocto/build-xwayland$

YOLO對(duì)象檢測(cè)

cd /usr/share/OpenCV/samples/bin

./example_dnn_object_detection --width=1024 --height=1024 --scale=0.00392 --input=dog416.png --rgb --zoo=models.yml yolo



Download the cfg and weights files from https://pjreddie.com/darknet/yolo/.

cd /usr/share/OpenCV/samples/bin/

導(dǎo)入上面下載的文件

cp /usr/share/OpenCV/samples/data/dnn/object_detection_classes_yolov3.txt /usr/share/OpenCV/samples/bin/

cp /usr/share/opencv4/testdata/dnn/yolov3.cfg /usr/share/OpenCV/samples/bin/

./example_dnn_object_detection --width=1024 --height=1024 --scale=0.00392 --input=dog416.png --rgb --zoo=models.yml yolo

OpenCV經(jīng)典機(jī)器學(xué)

cd /usr/share/OpenCV/samples/bin

Linear SVM

./example_tutorial_introduction_to_svm


Non-linear SVM

./example_tutorial_non_linear_svms


PCA Analysis

./example_tutorial_introduction_to_pca ../data/pca_test1.jpg


Logistic Regression

./example_cpp_logistic_regression


聲明:本文內(nèi)容及配圖由入駐作者撰寫或者入駐合作網(wǎng)站授權(quán)轉(zhuǎn)載。文章觀點(diǎn)僅代表作者本人,不代表電子發(fā)燒友網(wǎng)立場(chǎng)。文章及其配圖僅供工程師學(xué)習(xí)之用,如有內(nèi)容侵權(quán)或者其他違規(guī)問題,請(qǐng)聯(lián)系本站處理。 舉報(bào)投訴
  • 嵌入式開發(fā)
    +關(guān)注

    關(guān)注

    18

    文章

    1177

    瀏覽量

    50248
收藏 人收藏
加入交流群
微信小助手二維碼

掃碼添加小助手

加入工程師交流群

    評(píng)論

    相關(guān)推薦
    熱點(diǎn)推薦

    對(duì) i.MX 8M Plus SoC 通過外部調(diào)試進(jìn)行 JTAG 調(diào)試的行為一些疑問,求解答

    我對(duì) i.MX 8M Plus SoC 通過外部調(diào)試進(jìn)行 JTAG 調(diào)試的行為一些疑問,我希望您能幫助我解決這個(gè)問題。 與我使用的其他
    發(fā)表于 04-23 06:04

    i.m.x 8M Plus linux 鏡像構(gòu)建錯(cuò)誤怎么解決?

    我正在使用 i.m.x 8M plus 處理器,我已經(jīng)按照所需的步驟構(gòu)建多媒體圖像。我面臨 bitbake 超時(shí)錯(cuò)誤。 遵循以下文檔作為參考。并附上錯(cuò)誤圖片以供參考。 使用的構(gòu)建命
    發(fā)表于 04-21 10:04

    無法將 FlexCan 與 i.MX 8M Plus EVK 一起使用,為什么?

    我正在使用\" i.MX 8M Plus EVK ”, and i have flashed on it the latest andro
    發(fā)表于 04-17 06:54

    如何下載 i.MX 8M Plus SDK?

    我正在使用 i.MX 8M Plus 處理器,并想下載適用于 Cortex-A53 的適當(dāng) SDK。我在產(chǎn)品頁面上找不到直接下載鏈接。 您能否引導(dǎo)我到正確的位置或提供下載
    發(fā)表于 04-16 07:46

    i.mx 8M Plus PMIC PCA9450CHN不工作是為什么?

    一個(gè) Phytec 的 imx 8m Plus 開發(fā)板。MIPI DSI 5V 線路和 GND 意外短路?,F(xiàn)在電路無法啟動(dòng)。從 phy
    發(fā)表于 04-10 12:54

    如何在“i.MX 8M Plus EVK Board”上的網(wǎng)絡(luò)瀏覽中打開.html文件?

    i am using “i.MX 8M Plus EVK ”,我已經(jīng)閃過了“l(fā)f_v6.12.34-2.1.0_images_imx
    發(fā)表于 04-10 08:56

    恩智浦全新i.MX 93W應(yīng)用處理器重磅發(fā)布

    恩智浦半導(dǎo)體宣布推出i.MX 93W應(yīng)用處理器,進(jìn)一步擴(kuò)展其i.MX 93產(chǎn)品系列。這款i.MX 93W片上系統(tǒng)(SoC)專為加速物理AI的部署而設(shè)計(jì),是首款將專用AI神經(jīng)
    的頭像 發(fā)表于 03-16 09:45 ?2517次閱讀

    請(qǐng)問qemu 可以模擬 i.MX 8M Plus 嗎?

    我們沒有i.MX 8M Plus,所以我想問一下 qemu 是否可以模擬i.MX 8M
    發(fā)表于 03-05 08:10

    探索FRDM - IMX8MPLUS開發(fā)板:開啟嵌入式開發(fā)新旅程

    MPLUS開發(fā)板就是這樣一款值得深入探索的產(chǎn)品。它為開發(fā)者提供了一個(gè)低成本、高性能的硬件平臺(tái),能夠幫助我們快速熟悉i.MX 8M Plus應(yīng)
    的頭像 發(fā)表于 12-24 11:00 ?548次閱讀

    恩智浦FRDM i.MX 8M Plus開發(fā)板詳解

    開發(fā)高級(jí)HMI應(yīng)用、計(jì)算機(jī)視覺系統(tǒng)以及邊緣AI項(xiàng)目時(shí),開發(fā)人員常常面臨一個(gè)共同挑戰(zhàn):如何在不依賴昂貴且復(fù)雜的開發(fā)平臺(tái)的前提下,獲得足夠的處理能力。這正是FRDM
    的頭像 發(fā)表于 11-18 15:07 ?1693次閱讀

    簡(jiǎn)單認(rèn)識(shí)NXP FRDM i.MX 93開發(fā)板

    FRDM i.MX 93開發(fā)板是一款入門級(jí)、緊湊型開發(fā)板,采用i.MX93應(yīng)用處理器。該配備板
    的頭像 發(fā)表于 11-17 09:45 ?1842次閱讀
    簡(jiǎn)單認(rèn)識(shí)NXP FRDM <b class='flag-5'>i.MX</b> 93<b class='flag-5'>開發(fā)板</b>

    恩智浦推出i.MX 952人工智能應(yīng)用處理器

    恩智浦半導(dǎo)體宣布推出i.MX 9系列的新成員——i.MX 952應(yīng)用處理器。該處理器專為AI視覺、人機(jī)接口(HMI)及座艙感知應(yīng)用而設(shè)計(jì),通過集成eIQ Neutron神經(jīng)
    的頭像 發(fā)表于 10-27 09:15 ?3758次閱讀

    恩智浦FRDM i.MX 8M Plus開發(fā)板上架

    i.MX 8M Plus應(yīng)用處理器集成2個(gè)或4個(gè)Arm Cortex-A53核、1個(gè)專用于實(shí)時(shí)控制的Arm Cortex-M7核,以及1個(gè)算
    的頭像 發(fā)表于 08-16 17:38 ?2453次閱讀
    恩智浦FRDM <b class='flag-5'>i.MX</b> <b class='flag-5'>8M</b> <b class='flag-5'>Plus</b><b class='flag-5'>開發(fā)板</b>上架

    米爾NXP i.MX 91核心發(fā)布,助力新一代入門級(jí)Linux應(yīng)用開發(fā)

    本帖最后由 blingbling111 于 2025-5-30 16:17 編輯 米爾電子基于與NXP長(zhǎng)期合作的嵌入式處理器開發(fā)經(jīng)驗(yàn),在i.MX 6和i.MX
    發(fā)表于 05-30 11:20

    NXP i.MX 91開發(fā)板#支持快速創(chuàng)建基于Linux?的邊緣器件

    NXP Semiconductors FRDM i.MX 91開發(fā)板設(shè)計(jì)用于評(píng)估i.MX 91應(yīng)用處理器,支持快速創(chuàng)建基于Linux ^?^ 的邊緣器件。該
    的頭像 發(fā)表于 05-19 10:55 ?3493次閱讀
    NXP <b class='flag-5'>i.MX</b> 91<b class='flag-5'>開發(fā)板</b>#支持快速創(chuàng)建基于Linux?的邊緣器件
    镇远县| 镇远县| 庆元县| 高要市| 许昌市| 聂拉木县| 阿巴嘎旗| 怀安县| 沈阳市| 平乡县| 福泉市| 平乡县| 天峻县| 蓬溪县| 三穗县| 青岛市| 手游| 康保县| 建始县| 阳新县| 贺州市| 巴里| 抚州市| 西畴县| 顺平县| 忻州市| 金门县| 敦化市| 锡林郭勒盟| 吴旗县| 五指山市| 昌邑市| 丰台区| 舒城县| 彭阳县| 东丰县| 桦甸市| 扬州市| 娄底市| 融水| 五峰|