A Review of Intel's Development Board


Author: Sui Xiaojin (first published by 英特尔物联网, 2025-01-24)

I received Intel's development board — the little Nezha — and I happen to have an OAK camera on hand. Both belong to the same OpenVINO ecosystem, so a joint review makes sense. Surprisingly, the board ships with Windows by default; flashing it to Linux makes testing more convenient.


Let's first flash the board to Linux, which makes testing easier. Windows + Python development would also work, but I want the more challenging Ubuntu + C++ inference route, and I also want to test ncnn, so — reluctantly — the stock system goes. Choose the Intel-adapted Ubuntu 22.04 image, making sure it matches the board's CPU model:


Downloading with Motrix is fast; then use Rufus to write the image to a USB drive and flash it onto the SD card. Once the system is Linux, you can set up SSH and work remotely.

The system needs the official OpenVINO components installed so the Intel side can run OpenVINO model inference. You could also use ncnn/MNN/ONNX Runtime, but the native components are friendlier here.
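For a sense of what the native route looks like in code, here is a minimal OpenVINO inference sketch in Python (runtime API as of 2022.1+). The path model.xml is a placeholder; substitute one of the IR models converted later in this post.

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")             # IR pair: model.xml + model.bin
compiled = core.compile_model(model, "CPU")      # "GPU" would target the Intel iGPU
dummy = np.zeros(list(compiled.input(0).shape), dtype=np.float32)
results = compiled([dummy])                      # synchronous inference on a dummy input
for out in compiled.outputs:
    print(out.get_any_name(), results[out].shape)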


First set up the OAK environment, which handles depth-camera inference and distance measurement; then run detection inference on the board itself to exercise its performance. Conveniently, the chip inside the camera is also Intel and uses the OpenVINO framework. The following commands, run on the board, configure the camera's library environment:

ubuntu@ubuntu:~$ wget https://gitee.com/oakchina/depthai-core/releases/download/v2.28.0/depthai_2.28.0_amd64.deb
ubuntu@ubuntu:~$ sudo dpkg -i depthai_2.28.0_amd64.deb
(Reading database ... 164136 files and directories currently installed.)
Preparing to unpack depthai_2.28.0_amd64.deb ...
Unpacking depthai (2.28.0) over (2.28.0) ...
Setting up depthai (2.28.0) ...
ubuntu@ubuntu:~$ sudo apt install -f
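Before going further it is worth confirming the OAK actually enumerates over USB. A small check, assuming the depthai Python package (pip install depthai) is installed:

import depthai as dai

# List every OAK device XLink can see; an empty list usually points to
# cabling or udev-rule problems rather than the board itself.
for info in dai.Device.getAllAvailableDevices():
    print(info.getMxId(), info.state)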

Next, configure OpenVINO by following the official guide; it is mainly needed later for writing code and converting models. I will write the code in C++ to make things more challenging.

https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html?PACKAGE=OPENVINO_BASE&VERSION=v_2022_3_2&ENVIRONMENT=DEV_TOOLS&OP_SYSTEM=LINUX&DISTRIBUTION=PIP

The link above is the package selector; the following steps, still on the board, install the dev tooling:

pip install openvino-dev==2022.3.2

The runtime archive itself comes from storage.openvinotoolkit.org:


ubuntu@ubuntu:~$ wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2023.3/linux/l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64.tgz
ubuntu@ubuntu:~$ mv l_openvino_toolkit_ubuntu22_2023.3.0.13775.ceeafaf64f3_x86_64 openvino_2023
ubuntu@ubuntu:~$ sudo mv openvino_2023/ /opt/intel/
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2023/
ubuntu@ubuntu:/opt/intel/openvino_2023$ vim ~/.bashrc
# append this line at the end of ~/.bashrc:
source /opt/intel/openvino_2023/setupvars.sh
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2023/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2023/install_dependencies$ sudo -E ./install_openvino_dependencies.sh
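With setupvars.sh sourced, a quick sanity check that OpenVINO sees the board's hardware — at minimum CPU should appear, plus GPU once the iGPU driver is in place. A sketch:

from openvino.runtime import Core

core = Core()
for device in core.available_devices:
    print(device, core.get_property(device, "FULL_DEVICE_NAME"))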

The next steps run on my own host machine: it turns out the OpenVINO on the board cannot convert the camera's blob models, while the older OpenVINO release that can will not install on the board — 2021.4 supports Ubuntu 20.04 and below, and the board's 22.04 is too new.

Start with YOLOv5-Lite, for which the vendor provides a method and examples, summarized below. I did this on my own Ubuntu 20.04 host, since the board's system is too new and I was worried that a blob converted by its OpenVINO environment might not run on the OAK camera.

ubuntu@ubuntu:~/Downloads$ axel -n 100 https://registrationcenter-download.intel.com/akdlm/IRC_NAS/18096/l_openvino_toolkit_p_2021.4.689.tgz
ubuntu@ubuntu:~$ tar -zxvf l_openvino_toolkit_p_2021.4.689.tgz 
ubuntu@ubuntu:~/Downloads$ cd l_openvino_toolkit_p_2021.4.689/
ubuntu@ubuntu:~/Downloads/l_openvino_toolkit_p_2021.4.689$ sudo ./install_GUI.sh 
ubuntu@ubuntu:~$ cd /opt/intel/openvino_2021/install_dependencies/
ubuntu@ubuntu:/opt/intel/openvino_2021/install_dependencies$ sudo -E ./install_openvino_dependencies.sh 
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ sudo vim ~/.bashrc 

Append at the end:

source /opt/intel/openvino_2021/bin/setupvars.sh
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ source ~/.bashrc 
[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:/opt/intel/openvino_2021/bin$ cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites$ sudo ./install_prerequisites.sh

Download the model and convert it:

ubuntu@ubuntu:~$ git clone https://github.com/ppogg/YOLOv5-Lite

For the model code, refer to the official OAK examples:


Convert to ONNX and then to OpenVINO IR; export_onnx.py comes from the official reference:

ubuntu@ubuntu:~/YOLOv5-Lite$ pip3 install -r requirements.txt
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 export_onnx.py -w v5lite-e.pt -imgsz 640
Namespace(blob=False, convert_tool='blobconverter', img_size=[640, 640], 
input_model=PosixPath('/home/ubuntu/YOLOv5-Lite/v5lite-e.pt'), name='v5lite-e', 
opset=12, output_dir=PosixPath('/home/ubuntu/YOLOv5-Lite'), shaves=6, 
spatial_detection=False)
[18:12:38] INFO   YOLOv5  v1.5-16-g9d649a6 torch 2.4.1+cu121 CPU      
                                        
Fusing layers... 
[18:12:41] INFO   Model Summary: 167 layers, 781205 parameters, 0 gradients, 
          2.9 GFLOPS                         
 
      INFO   Starting ONNX export with onnx 1.16.1...          
      INFO   Starting to simplify ONNX...                
      INFO   ONNX export success, saved as:               
              /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx       
      INFO   anchors:                          
              [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0,  
          62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0, 198.0, 373.0, 
          326.0]                           
      INFO   anchor_masks:                        
              {'side80': [0, 1, 2], 'side40': [3, 4, 5], 'side20':
          [6, 7, 8]}                         
      INFO   Anchors data export success, saved as:           
              /home/ubuntu/YOLOv5-Lite/v5lite-e.json       
      INFO   Export complete (3.61s).                  
ubuntu@ubuntu:~/YOLOv5-Lite$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
Model Optimizer arguments:
Common parameters:
  - Path to the Input Model:   /home/ubuntu/YOLOv5-Lite/v5lite-e.onnx
  - Path for generated IR:   /home/ubuntu/YOLOv5-Lite/saved/FP16
  - IR output name:   v5lite-e
  - Log level:   ERROR
  - Batch:   Not specified, inherited from the model
  - Input layers:   Not specified, inherited from the model
  - Output layers:   Not specified, inherited from the model
  - Input shapes:   [1,3,640,640]
  - Mean values:   [0,0,0]
  - Scale values:   [255.0,255.0,255.0]
  - Scale factor:   Not specified
  - Precision of IR:   FP16
  - Enable fusing:   True
  - Enable grouped convolutions fusing:   True
  - Move mean values to preprocess section:   None
  - Reverse input channels:   False
ONNX specific parameters:
  - Inference Engine found in:   /opt/intel/openvino_2021/python/python3.8/openvino
Inference Engine version:   2021.4.1-3926-14e67d86634-releases/2021/4
Model Optimizer version:   2021.4.1-3926-14e67d86634-releases/2021/4
[ WARNING ] 
Detected not satisfied dependencies:
  networkx: installed: 3.1, required: ~= 2.5
  numpy: installed: 1.23.5, required: < 1.20
 
Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.4.689/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_onnx.sh
Note that install_prerequisites scripts may install additional components.
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/onnx/parameter_ext.py:20: DeprecationWarning: `mapping.TENSOR_TYPE_TO_NP_TYPE` is now deprecated and will be removed in a future release.To silence this warning, please use `helper.tensor_dtype_to_np_dtype` instead.
  'data_type': TENSOR_TYPE_TO_NP_TYPE[t_type.elem_type]
/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/analysis/boolean_input.py:13: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  nodes = graph.get_op_nodes(op='Parameter', data_type=np.bool)
/opt/intel/openvino_2021/deployment_tools/model_optimizer/mo/front/common/partial_infer/concat.py:36: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  mask = np.zeros_like(shape, dtype=np.bool)
[ WARNING ]  Const node '/model.8/Resize/Add_input_port_1/value338417277' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ]  Const node '/model.12/Resize/Add_input_port_1/value341817280' returns shape values of 'float64' type but it must be integer or float32. During Elementwise type inference will attempt to cast to float32
[ WARNING ]  Changing Const node '/model.8/Resize/Add_input_port_1/value338418006' data type from float16 to  for Elementwise operation
[ WARNING ] Changing Const node '/model.12/Resize/Add_input_port_1/value341817580' data type from float16 to  for Elementwise operation
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
[ SUCCESS ] Total execution time: 10.69 seconds. 
[ SUCCESS ] Memory consumed: 104 MB. 
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*
ubuntu@ubuntu:~/YOLOv5-Lite$ 

Converting the model (this also works on the board, using the openvino-dev tooling installed earlier):

ubuntu@ubuntu:~$ find . -name "mo_onnx.py"
./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py
ubuntu@ubuntu:~$ python3 ./.local/lib/python3.10/site-packages/openvino/tools/mo/mo_onnx.py --input_model v5lite-e.onnx --output_dir /home/ubuntu/YOLOv5-Lite/saved/FP16 --input_shape [1,3,640,640] --data_type FP16 --scale_values [255.0,255.0,255.0] --mean_values [0,0,0]
[ WARNING ] Use of deprecated cli option --data_type detected. Option use in the following releases will be fatal.
Check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2023_bu_IOTG_OpenVINO-2022-3&content=upg_all&medium=organic or on https://github.com/openvinotoolkit/openvino
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml
[ SUCCESS ] BIN file: /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin
ubuntu@ubuntu:~$ pip3 install blobconverter
Then compile the blob:


[setupvars.sh] OpenVINO environment initialized
ubuntu@ubuntu:~/YOLOv5-Lite$ cd /opt/intel/openvino_2021/deployment_tools/tools
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ sudo chmod 777 compile_tool/
[sudo] password for ubuntu: 
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools$ cd compile_tool/
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ./compile_tool -m /home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES 4 -VPU_NUMBER_OF_CMX_SLICES 4
Inference Engine: 
  IE version ......... 2021.4.1
  Build ........... 2021.4.1-3926-14e67d86634-releases/2021/4
 
Network inputs:
  images : U8 / NCHW
Network outputs:
  output1_yolov5 : FP16 / NCHW
  output2_yolov5 : FP16 / NCHW
  output3_yolov5 : FP16 / NCHW
[Warning][VPU][Config] Deprecated option was used : VPU_MYRIAD_PLATFORM
Done. LoadNetwork time elapsed: 6529 ms
ubuntu@ubuntu:/opt/intel/openvino_2021/deployment_tools/tools/compile_tool$ ls
compile_tool README.md v5lite-e.blob
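As an alternative to the local compile_tool, the blobconverter package installed above can produce the same blob through Luxonis' online conversion service — a sketch pointing at the IR generated earlier:

import blobconverter

# Compile the FP16 IR into a .blob for the OAK's Myriad X via the online service.
blob_path = blobconverter.from_openvino(
    xml="/home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.xml",
    bin="/home/ubuntu/YOLOv5-Lite/saved/FP16/v5lite-e.bin",
    data_type="FP16",
    shaves=6,
)
print(blob_path)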

With the model exported, first try it on the OAK camera. In this setup the entire model runs on the camera, inference and distance measurement included — which at least shows the board works with depth cameras like the OAK.


Next, modify the code: run model inference on the board with OpenVINO while keeping distance measurement on the camera. Below is the code, built remotely on the board through CLion. Plug the OAK depth camera into the Nezha board's USB port, connect the board to a display, and start the camera; I will upload everything to GitHub later.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link OpenCV libraries
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv )
 

main.cpp

#include <chrono>
#include <iostream>
// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"
 
/*
The code is the same as for Tiny-yolo-V3, the only difference is the blob file.
The blob was compiled following this tutorial: https://github.com/TNTWEN/OpenVINO-YOLOV4
*/
 
 
static const std::vector<std::string> labelMap = {
        "person",        "bicycle",      "car",           "motorbike",     "aeroplane",   "bus",         "train",       "truck",        "boat",
        "traffic light", "fire hydrant", "stop sign",     "parking meter", "bench",       "bird",        "cat",         "dog",          "horse",
        "sheep",         "cow",          "elephant",      "bear",          "zebra",       "giraffe",     "backpack",    "umbrella",     "handbag",
        "tie",           "suitcase",     "frisbee",       "skis",          "snowboard",   "sports ball", "kite",        "baseball bat", "baseball glove",
        "skateboard",    "surfboard",    "tennis racket", "bottle",        "wine glass",  "cup",         "fork",        "knife",        "spoon",
        "bowl",          "banana",       "apple",         "sandwich",      "orange",      "broccoli",    "carrot",      "hot dog",      "pizza",
        "donut",         "cake",         "chair",         "sofa",          "pottedplant", "bed",         "diningtable", "toilet",       "tvmonitor",
        "laptop",        "mouse",        "remote",        "keyboard",      "cell phone",  "microwave",   "oven",        "toaster",      "sink",
        "refrigerator",  "book",         "clock",         "vase",          "scissors",    "teddy bear",  "hair drier",  "toothbrush"};
 
static std::atomic<bool> syncNN{true};
 
 
int main() {
    // Create pipeline
    dai::Pipeline pipeline;
 
    // Define sources
    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    auto monoLeft = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    auto spatialDataCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();
 
 
    // Properties
    camRgb->setPreviewSize(640, 640);
    camRgb->setBoardSocket(dai::CameraBoardSocket::RGB);
    camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
    camRgb->setInterleaved(false);
    camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
    camRgb->setPreviewKeepAspectRatio(false); // stretch video into the preview size, keeping detections aligned

    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
 
 
    stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
    stereo->setLeftRightCheck(true);
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
    stereo->setExtendedDisparity(true);
 
    dai::Point2f topLeft(0.4f, 0.4f);
    dai::Point2f bottomRight(0.6f, 0.6f);
 
    dai::SpatialLocationCalculatorConfigData config;
    config.depthThresholds.lowerThreshold = 100;
    config.depthThresholds.upperThreshold = 10000;
    config.roi = dai::Rect(topLeft, bottomRight);
 
    spatialDataCalculator->initialConfig.addROI(config);
    spatialDataCalculator->inputConfig.setWaitForMessage(false);
 
 
    // Network specific settings
    auto detectionNetwork = pipeline.create<dai::node::YoloDetectionNetwork>();
    detectionNetwork->setBlob("../v5lite-e.blob");
    detectionNetwork->setConfidenceThreshold(0.5);
    //Yolo specific parameters
    detectionNetwork->setNumClasses(80);
    detectionNetwork->setCoordinateSize(4);
    detectionNetwork->setAnchors({10,13,16,30,33,23,30,61,62,45,59,119,116,90,156,198,373,326});
    detectionNetwork->setAnchorMasks({{{"side80",{0, 1, 2}},{"side40",{3, 4, 5}},{"side20",{6, 7, 8}}}});
    detectionNetwork->setIouThreshold(0.5);
 
    // rgb output
    auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
    xoutRgb->setStreamName("rgb");

    // depth output
    auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
    xoutDepth->setStreamName("depth");

    // spatial-calculator data output
    auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
    xoutSpatialData->setStreamName("spatialData");

    // spatial-calculator config input
    auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
    xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
 
 
    // Linking: video (full resolution) feeds the display, preview feeds inference
    camRgb->video.link(xoutRgb->input);            // display uses video
    camRgb->preview.link(detectionNetwork->input); // inference uses preview
    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);
 
    spatialDataCalculator->passthroughDepth.link(xoutDepth->input);
    stereo->depth.link(spatialDataCalculator->inputDepth);
 
    spatialDataCalculator->out.link(xoutSpatialData->input);
    xinSpatialCalcConfig->out.link(spatialDataCalculator->inputConfig);
 
 
    // output
    auto xlinkParseOut = pipeline.create<dai::node::XLinkOut>();
    xlinkParseOut->setStreamName("parseOut");

    auto xlinkoutOut = pipeline.create<dai::node::XLinkOut>();
    xlinkoutOut->setStreamName("out");

    auto xlinkPassthroughOut = pipeline.create<dai::node::XLinkOut>();
    xlinkPassthroughOut->setStreamName("passthrough");
 
 
    detectionNetwork->out.link(xlinkParseOut->input);
    detectionNetwork->passthrough.link(xlinkPassthroughOut->input);
 
 
    // Connect to device and start pipeline
    dai::Device device;
 
    device.setIrLaserDotProjectorBrightness(1000);
    device.setIrFloodLightBrightness(0);
    device.startPipeline(pipeline);
 
    // Output queues will be used to get the rgb frames and nn data from the outputs defined above
    auto detectQueue = device.getOutputQueue("parseOut",8,false);
    auto passthQueue = device.getOutputQueue("passthrough", 8, false);
    auto depthQueue = device.getOutputQueue("depth", 8, false);
    auto spatialCalcQueue = device.getOutputQueue("spatialData", 8, false);
    auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig", 8, false);
    auto rgbQueue = device.getOutputQueue("rgb", 8, false);
 
    bool printOutputLayersOnce = true;
    auto color = cv::Scalar(0,255,0);
 
 
    std::vector<dai::ImgDetection> detections;
    auto startTime = std::chrono::steady_clock::now();
    int counter = 0;
    float fps = 0;
    auto color2 = cv::Scalar(255, 255, 255);
    cv::Scalar color1 = cv::Scalar(0, 0, 255);
 
    while (true) {
        counter++;
        auto currentTime = std::chrono::steady_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::duration<float>>(currentTime - startTime);
        if(elapsed > std::chrono::seconds(1)) {
            fps = counter / elapsed.count();
            counter = 0;
            startTime = currentTime;
        }
 
        std::shared_ptr<dai::ImgFrame> inRgb = rgbQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgFrame> inDepth = depthQueue->get<dai::ImgFrame>();
        std::shared_ptr<dai::ImgDetections> inDet = detectQueue->get<dai::ImgDetections>();
        std::shared_ptr<dai::ImgFrame> ImgFrame = passthQueue->get<dai::ImgFrame>();
 
        cv::Mat frame = inRgb->getCvFrame();
        cv::Mat src = ImgFrame->getCvFrame();
 
        cv::Mat depthFrameColor;
        cv::Mat depthFrame = inDepth->getFrame();
        cv::normalize(depthFrame, depthFrameColor, 255, 0, cv::NORM_INF, CV_8UC1);
        cv::equalizeHist(depthFrameColor, depthFrameColor);
        cv::applyColorMap(depthFrameColor, depthFrameColor, cv::COLORMAP_HOT);
 
        if(inDet) {
            detections = inDet->detections;
            for(auto& detection : detections) {
                int x1 = detection.xmin * src.cols;
                int y1 = detection.ymin * src.rows;
                int x2 = detection.xmax * src.cols;
                int y2 = detection.ymax * src.rows;
 
                uint32_t labelIndex = detection.label;
                std::string labelStr = std::to_string(labelIndex);
                if(labelIndex < labelMap.size()) {
                    labelStr = labelMap[labelIndex];
                }
                cv::putText(src, labelStr, cv::Point(x1 + 10, y1 + 20), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                std::stringstream confStr;
                confStr << std::fixed << std::setprecision(2) << detection.confidence * 100;
                cv::putText(src, confStr.str(), cv::Point(x1 + 10, y1 + 40), cv::FONT_HERSHEY_TRIPLEX, 0.5, 255);
                cv::rectangle(src, cv::Point(x1, y1), cv::Point(x2, y2), color, 2);
 
                // 1920x1080 frame
                //cv::rectangle(depthFrameColor, cv::Point(x1, y1), cv::Point(x2, y2), color, 2);
                int top_left_x = detection.xmin * frame.cols;
                int top_left_y = detection.ymin * frame.rows;
                int bottom_right_x = detection.xmax * frame.cols;
                int bottom_right_y = detection.ymax * frame.rows;
 
                // clamp to frame bounds
                top_left_x = top_left_x < 0 ? 0 : top_left_x;
                bottom_right_x = bottom_right_x > frame.cols - 1 ? frame.cols - 1 : bottom_right_x;
                top_left_y = top_left_y < 0 ? 0 : top_left_y;
                bottom_right_y = bottom_right_y > frame.rows - 1 ? frame.rows - 1 : bottom_right_y;
 
                topLeft.x = top_left_x;
                topLeft.y = top_left_y;
                bottomRight.x = bottom_right_x;
                bottomRight.y = bottom_right_y;
 
                // push an ROI in actual pixel coordinates to the spatial calculator
                config.roi = dai::Rect(topLeft, bottomRight);
                dai::SpatialLocationCalculatorConfig cfg;
                cfg.addROI(config);
                spatialCalcConfigInQueue->send(cfg);
                std::vector<dai::SpatialLocations> spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
 
                for (auto &depthData : spatialData) {
                    auto roi = depthData.config.roi;
                    roi = roi.denormalize(depthFrameColor.cols, depthFrameColor.rows);
                    auto xmin = (int) roi.topLeft().x;
                    auto ymin = (int) roi.topLeft().y;
                    auto xmax = (int) roi.bottomRight().x;
                    auto ymax = (int) roi.bottomRight().y;
 
                    // clamp to frame bounds (optional)
//                    xmin = xmin < 0 ? 0 : xmin;
//                    xmax = xmax > frame.cols - 1 ? frame.cols - 1 : xmax;
//                    ymin = ymin < 0 ? 0 : ymin;
//                    ymax = ymax > frame.rows - 1 ? frame.rows - 1 : ymax;
 
                    auto coords = depthData.spatialCoordinates;
                    auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
                    auto fontType = cv::FONT_HERSHEY_TRIPLEX;
 
                    std::stringstream rgb_depthX, depthX, rgb_depthX_;
                    rgb_depthX << "X: " << (int) coords.x << " mm";
                    rgb_depthX_.precision(2);
                    rgb_depthX_ << "dis: " << std::fixed << static_cast<float>(distance) << " mm";
 
                    cv::rectangle(frame,
                                  cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                                  color,
                                  2);
 
                    cv::putText(frame, rgb_depthX_.str(), cv::Point(xmin + 10, ymin - 20),
                                fontType,
                                0.5, color1);
 
                    cv::putText(frame, rgb_depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthY, depthY;
                    rgb_depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(frame, rgb_depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType,
                                0.5, color1);
                    std::stringstream rgb_depthZ, depthZ;
                    rgb_depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(frame, rgb_depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType,
                                0.5, color1);
 
 
                    cv::rectangle(depthFrameColor,
                            cv::Point(xmin, ymin), cv::Point(xmax, ymax),
                            color,
                            2);
                    depthX << "X: " << (int) coords.x << " mm";
                    cv::putText(depthFrameColor, depthX.str(), cv::Point(xmin + 10, ymin + 20),
                                fontType, 0.5, color1);
                    depthY << "Y: " << (int) coords.y << " mm";
                    cv::putText(depthFrameColor, depthY.str(), cv::Point(xmin + 10, ymin + 35),
                                fontType, 0.5, color1);
                    depthZ << "Z: " << (int) coords.z << " mm";
                    cv::putText(depthFrameColor, depthZ.str(), cv::Point(xmin + 10, ymin + 50),
                                fontType, 0.5, color1);
                }
            }
 
            std::stringstream fpsStr;
            fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
//            printf("fps %f\n", fps);
            cv::putText(src, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
            cv::putText(frame, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 1,
                        cv::Scalar(0, 255, 0));
 
            // Show the frame
//            cv::imshow("src", src);
            cv::imshow("frame", frame);
            cv::imwrite("frame.jpg", frame);
//            cv::imshow("depth", depthFrameColor);
            int key = cv::waitKey(1);
            if(key == 'q' || key == 'Q' || key == 27) {
                return 0;
            }
        }
    }
}


Next, drop the inference code on the camera side and run inference locally on the Nezha board instead, replacing the whole thing with OpenVINO-based inference:

(1) First, switch frame capture to go through the encoder while keeping distance measurement: the camera streams H.265 and the board does pure CPU decoding at around 30 fps — not bad; this little board's software decoding holds up.

CMakeLists.txt

cmake_minimum_required(VERSION 3.16)
project(demo)
set(CMAKE_CXX_STANDARD 11)
find_package(OpenCV REQUIRED)
#message(STATUS ${OpenCV_INCLUDE_DIRS})
# add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/include)
include_directories(${CMAKE_SOURCE_DIR}/include/utility)
# link OpenCV libraries
find_package(depthai CONFIG REQUIRED)
add_executable(demo main.cpp include/utility/utility.cpp)
target_link_libraries(demo ${OpenCV_LIBS} depthai::opencv -lavformat -lavcodec -lswscale -lavutil -lz)

main.cpp

#include <chrono>
#include <cmath>
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}
 
 
#include "utility.hpp"
 
#include "depthai/depthai.hpp"
 
using namespace std::chrono;
 
int main(int argc, char **argv) {
  dai::Pipeline pipeline;
  // define nodes
  auto cam = pipeline.create<dai::node::ColorCamera>();
  cam->setBoardSocket(dai::CameraBoardSocket::RGB);
  cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
  cam->setVideoSize(1920, 1080);
  cam->setFps(30);
  auto Encoder = pipeline.create<dai::node::VideoEncoder>();
  Encoder->setDefaultProfilePreset(cam->getVideoSize(), cam->getFps(),
                   dai::VideoEncoderProperties::Profile::H265_MAIN);
 
 
  cam->video.link(Encoder->input);
 
  auto monoLeft = pipeline.create<dai::node::MonoCamera>();
  auto monoRight = pipeline.create<dai::node::MonoCamera>();
  auto stereo = pipeline.create<dai::node::StereoDepth>();
  auto spatialLocationCalculator = pipeline.create<dai::node::SpatialLocationCalculator>();

  auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
  auto xoutSpatialData = pipeline.create<dai::node::XLinkOut>();
  auto xinSpatialCalcConfig = pipeline.create<dai::node::XLinkIn>();
  auto xoutRgb = pipeline.create<dai::node::XLinkOut>();
  xoutDepth->setStreamName("depth");
  xoutSpatialData->setStreamName("spatialData");
  xinSpatialCalcConfig->setStreamName("spatialCalcConfig");
  xoutRgb->setStreamName("rgb");

  monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
  monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
  monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
 
  stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
  stereo->setLeftRightCheck(true);
  stereo->setExtendedDisparity(true);
  spatialLocationCalculator->inputConfig.setWaitForMessage(false);
 
 
  dai::SpatialLocationCalculatorConfigData config;
  config.depthThresholds.lowerThreshold = 200;
  config.depthThresholds.upperThreshold = 10000;
  config.roi = dai::Rect(dai::Point2f(0.1f, 0.45f), dai::Point2f(0.2f, 0.55f)); // initial ROI (normalized coordinates)
  spatialLocationCalculator->initialConfig.addROI(config);
 
  // Linking
  monoLeft->out.link(stereo->left);
  monoRight->out.link(stereo->right);
 
  spatialLocationCalculator->passthroughDepth.link(xoutDepth->input);
  stereo->depth.link(spatialLocationCalculator->inputDepth);
 
  spatialLocationCalculator->out.link(xoutSpatialData->input);
  xinSpatialCalcConfig->out.link(spatialLocationCalculator->inputConfig);
 
 
  // define encoder output
  auto xlinkoutpreviewOut = pipeline.create<dai::node::XLinkOut>();
  xlinkoutpreviewOut->setStreamName("out");
 
  Encoder->bitstream.link(xlinkoutpreviewOut->input);
 
 
  // build the device and push the pipeline to the camera
  dai::Device device(pipeline);
  device.setIrLaserDotProjectorBrightness(1000);
 
  // fetch frames for display
  auto outqueue = device.getOutputQueue("out", cam->getFps(), false); // maxSize is the buffer depth
  auto depthQueue = device.getOutputQueue("depth", 4, false);
  auto spatialCalcQueue = device.getOutputQueue("spatialData", 4, false);
 
  //auto videoFile = std::ofstream("video.h265", std::ios::binary);
 
 
  int width = 1920;
  int height = 1080;
  AVCodec *pCodec = avcodec_find_decoder(AV_CODEC_ID_H265);
  AVCodecContext *pCodecCtx = avcodec_alloc_context3(pCodec);
  int ret = avcodec_open2(pCodecCtx, pCodec, NULL);
  if (ret < 0) { // failed to open the decoder
        printf("Could not open codec.\n");
        return -1;
    }
    AVFrame *picture = av_frame_alloc();
    picture->width = width;
  picture->height = height;
  picture->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(picture, 1);
  if (ret < 0) {
        printf("av_frame_get_buffer error\n");
        return -1;
    }
    AVFrame *pFrame = av_frame_alloc();
    pFrame->width = width;
  pFrame->height = height;
  pFrame->format = AV_PIX_FMT_YUV420P;
  ret = av_frame_get_buffer(pFrame, 1);
  if (ret < 0) {
        printf("av_frame_get_buffer error\n");
        return -1;
    }
    AVFrame *pFrameRGB = av_frame_alloc();
    pFrameRGB->width = width;
  pFrameRGB->height = height;
  pFrameRGB->format = AV_PIX_FMT_RGB24;
  ret = av_frame_get_buffer(pFrameRGB, 1);
  if (ret < 0) {
        printf("av_frame_get_buffer error\n");
        return -1;
    }
 
 
    int picture_size = av_image_get_buffer_size(AV_PIX_FMT_YUV420P, width, height,
                                                1); // bytes needed to store one frame in this format
    uint8_t *out_buff = (uint8_t *) av_malloc(picture_size * sizeof(uint8_t));
    av_image_fill_arrays(picture->data, picture->linesize, out_buff, AV_PIX_FMT_YUV420P, width,
             height, 1);
  // conversion context: YUV420P (decoder output) -> RGB24 for OpenCV
  SwsContext *img_convert_ctx = sws_getContext(width, height, AV_PIX_FMT_YUV420P,
                         width, height, AV_PIX_FMT_RGB24, SWS_BICUBIC,
                         NULL, NULL, NULL);
  AVPacket *packet = av_packet_alloc();
 
  auto startTime = steady_clock::now();
  int counter = 0;
  float fps = 0;
  auto spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig");
  while (true) {
    counter++;
    auto currentTime = steady_clock::now();
    auto elapsed = duration_cast<duration<float>>(currentTime - startTime);
    if (elapsed > seconds(1)) {
      fps = counter / elapsed.count();
      counter = 0;
      startTime = currentTime;
    }
 
 
 
 
    auto h265Packet = outqueue->get<dai::ImgFrame>();
 
 
    //videoFile.write((char *) (h265Packet->getData().data()), h265Packet->getData().size());
 
    packet->data = (uint8_t *) h265Packet->getData().data(); // pointer to a complete H.265 frame
    packet->size = h265Packet->getData().size();             // size of the H.265 frame
    packet->stream_index = 0;
    ret = avcodec_send_packet(pCodecCtx, packet);
    if (ret < 0) {
            printf("avcodec_send_packet failed\n");
            continue;
        }
        av_packet_unref(packet);
        int got_picture = avcodec_receive_frame(pCodecCtx, pFrame);
        av_frame_is_writable(pFrame);
        if (got_picture < 0) {
            printf("avcodec_receive_frame failed\n");
            continue;
        }
 
        sws_scale(img_convert_ctx, pFrame->data, pFrame->linesize, 0,
         height,
         pFrameRGB->data, pFrameRGB->linesize);
 
 
    // wrap the decoded RGB buffer without copying
    cv::Mat mRGB(height, width, CV_8UC3, pFrameRGB->data[0], pFrameRGB->linesize[0]);
    cv::Mat mBGR;
    cv::cvtColor(mRGB, mBGR, cv::COLOR_RGB2BGR);
    std::stringstream fpsStr;
    fpsStr << "NN fps: " << std::fixed << std::setprecision(2) << fps;
        printf("fps %f\n", fps);
        cv::putText(mBGR, fpsStr.str(), cv::Point(4, 22), cv::FONT_HERSHEY_TRIPLEX, 0.4,
                    cv::Scalar(0, 255, 0));
 
 
        config.roi = dai::Rect(dai::Point2f(3 * 0.1f, 0.45f), dai::Point2f((3 + 1) * 0.1f, 0.55f));
        dai::SpatialLocationCalculatorConfig cfg;
        cfg.addROI(config);
        spatialCalcConfigInQueue->send(cfg);
 
    // auto inDepth = depthQueue->get();
    //cv::Mat depthFrame = inDepth->getFrame(); // depthFrame values are in millimeters
 
 
    auto spatialData = spatialCalcQueue->get<dai::SpatialLocationCalculatorData>()->getSpatialLocations();
    for(auto depthData : spatialData) {
      auto roi = depthData.config.roi;
      roi = roi.denormalize(mBGR.cols, mBGR.rows);
 
      auto xmin = static_cast<int>(roi.topLeft().x);
      auto ymin = static_cast<int>(roi.topLeft().y);
      auto xmax = static_cast<int>(roi.bottomRight().x);
      auto ymax = static_cast<int>(roi.bottomRight().y);
 
      auto coords = depthData.spatialCoordinates;
      auto distance = std::sqrt(coords.x * coords.x + coords.y * coords.y + coords.z * coords.z);
      auto color = cv::Scalar(0, 200, 40);
      auto fontType = cv::FONT_HERSHEY_TRIPLEX;
      cv::rectangle(mBGR, cv::Point(xmin, ymin), cv::Point(xmax, ymax), color);
      std::stringstream depthDistance;
      depthDistance.precision(2);
      depthDistance << std::fixed << static_cast<float>(distance / 1000.0f) << "m";
            cv::putText(mBGR, depthDistance.str(), cv::Point(xmin + 10, ymin + 20), fontType, 0.5, color);
        }
 
 
 
        cv::imshow("demo", mBGR);
        cv::imwrite("demo.jpg",mBGR);
 
        cv::waitKey(1);
 
 
    }
 
 
    return 0;
}

The whole program decodes on the Nezha board at around 30 fps, which is acceptable. I will skip the screenshots — you can reproduce it yourself; the prerequisite is having the FFmpeg libraries installed.

(2) YOLOv8 model conversion and inference on the board. Here you must make sure opset=11 — opset 14 does not work. The conversion itself can be done directly on the board.

ubuntu@ubuntu:~$ pip install ultralytics

Conversion code:

ubuntu@ubuntu:~$ cat convert_yolov8.py
from ultralytics import YOLO
 
# Load a model
model = YOLO("yolov8n.yaml") # build a new model from scratch
model = YOLO("yolov8n.pt") # load a pretrained model (recommended for training)
 
# Use the model
# model.train(data="coco8.yaml", epochs=3) # train the model
# metrics = model.val() # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg") # predict on an image
path = model.export(format="onnx") # export the model to ONNX format
path = model.export(format="openvino", opset=11) # export to OpenVINO IR (opset 11)
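To double-check the opset requirement in the exported file, the onnx package can read it back (assuming pip install onnx; yolov8n.onnx is the file produced by the export above):

import onnx

# Print the opset(s) the exported model declares; the default domain
# should report version 11 for the export above.
model = onnx.load("yolov8n.onnx")
for imp in model.opset_import:
    print(imp.domain or "ai.onnx", imp.version)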
CMakeLists.txt


cmake_minimum_required(VERSION 3.12)
project(yolov8_openvino_example)
 
set(CMAKE_CXX_STANDARD 14)
 
find_package(OpenCV REQUIRED)
 
include_directories(
  ${OpenCV_INCLUDE_DIRS}
  /opt/intel/openvino_2023/runtime/include
)
 
add_executable(detect 
  main.cc
  inference.cc
)
 
target_link_libraries(detect
  ${OpenCV_LIBS}
   /opt/intel/openvino_2023/runtime/lib/intel64/libopenvino.so
)

For the test code, just use the official example: ultralytics/examples/YOLOv8-OpenVINO-CPP-Inference at main · ultralytics/ultralytics · GitHub
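Before wiring up the C++ side, one quick way to confirm the exported IR works is to load the OpenVINO export directory back into ultralytics (the directory name below is the one convert_yolov8.py produces):

from ultralytics import YOLO

# Load the OpenVINO export directory and run a test prediction with it.
ov_model = YOLO("yolov8n_openvino_model/")
results = ov_model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes)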


(3) Add OpenVINO inference on the board plus CPU/FFmpeg decoding and streaming; I am not adding the OAK distance-measurement code this time.


This model turns out to be fairly heavy — adding it to the inference path makes things a bit laggy, so I left it out for now and just do CPU encoding/decoding and streaming. The test directory and GitHub address are below:


The pull-stream setup command is in the repository:

github:https://github.com/sxj731533730/OAK_Rtserver.git
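The exact pull-stream command lives in the repository; as a sketch, any RTSP-capable client can consume the stream — for example OpenCV, with a hypothetical URL you would replace with the board's address and stream path:

import cv2

# Hypothetical RTSP URL -- substitute the Nezha board's IP and stream path.
cap = cv2.VideoCapture("rtsp://192.168.1.100:8554/live")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("pull", frame)
    if cv2.waitKey(1) == ord("q"):
        break
cap.release()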

References:

[1] How to convert a YOLOv5-Lite model to blob format for the OAK camera (oakchina, CSDN blog)

https://blog.csdn.net/oakchina/article/details/129403986

[2]https://github.com/openvinotoolkit/openvino_notebooks/tree/latest/notebooks/pose-estimation-webcam


Original title: 开发者实战|英特尔开发板试用:结合oak深度相机进行评测

Source: WeChat official account 英特尔物联网 (Intel IoT). Please credit the source when reposting.
