Commit a0a05ee

Author: SzabolcsGergely
Merge remote-tracking branch 'origin/main' into HEAD
2 parents af11974 + 3ebda0d

24 files changed: +415 −44 lines

CMakeLists.txt

Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@ endif()
 
 # Pybindings project
 set(TARGET_NAME depthai)
-project(depthai VERSION "0") # revision of bindings [depthai-core].[rev]
+project(depthai VERSION "1") # revision of bindings [depthai-core].[rev]
 
 # Set default build type depending on context
 set(default_build_type "Release")

docs/source/components/nodes.rst

Lines changed: 49 additions & 13 deletions

@@ -3,32 +3,68 @@
 Nodes
 =====
 
-Nodes are the building blocks when populating the :ref:`Pipeline`. Each node provides a specific functionality on the DepthaI, a set of configurable
+Nodes are the building blocks when populating the :ref:`Pipeline`. Each node provides a specific functionality on the DepthAI, a set of configurable
 properties and inputs/outputs. After you create a node on a pipeline, you can also configure it as desired and link it to other nodes.
 
-.. rubric:: Outputs and inputs
+On the table of contents (left side of the page) all nodes are listed under the :code:`Node` entry. You can click on them to find out more.
+
+.. rubric:: Inputs and outputs
+
+Each node can have zero, one or multiple inputs and outputs. For example, the :ref:`SystemLogger` node has no inputs and 1 output, and :ref:`EdgeDetector` has
+2 inputs and 1 output, as shown below. The :ref:`Script` node can have any number of inputs/outputs.
+
+.. code-block::
+
+                    ┌───────────────────┐
+     inputImage     │                   │
+    ───────────────►│                   │
+                    │                   │ outputImage
+                    │   EdgeDetector    ├────────────►
+     inputConfig    │                   │
+    ───────────────►│                   │
+                    │                   │
+                    └───────────────────┘
+
+   EdgeDetector node has 2 inputs and 1 output
 
-Each node can have zero, one or multiple inputs and outputs. For example :ref:`SystemLogger` node has no inputs and 1 output and :ref:`StereoDepth` has
-2 inputs and 6 outputs.
 
 .. rubric:: Node input
 
-Node input queue is a queue for the :ref:`Messages`. It can be linked with other node's output (that's how you link up nodes). Node inputs are
-configurable - with :code:`input.setBlocking(bool)` and :code:`input.setQueueSize(num)`. Default behaviour is blocking and a queue size of 8 messages.
+A node input queue is a queue for :ref:`Messages`. It can be linked with another node's output (that's how you link up nodes). Node inputs are
+configurable with :code:`input.setBlocking(bool)` and :code:`input.setQueueSize(num)`, e.g. :code:`edgeDetector.inputImage.setQueueSize(10)`.
 If the input queue fills up, behavior of the input depends on blocking attribute.
-If blocking is enabled, new messages will be discarded until user gets the old messages. If blocking is disabled, new messages will push out old messages.
+
+Let's say we have linked the :ref:`ColorCamera` :code:`preview` output with the :ref:`NeuralNetwork` :code:`input` input.
+
+.. code-block::
+
+   ┌─────────────┐                     ┌───────────────┐
+   │             │                     │               │
+   │             │  preview     input  │               │
+   │ ColorCamera ├────────────────────►│ NeuralNetwork │
+   │             │      [ImgFrame]     │               │
+   │             │                     │               │
+   └─────────────┘                     └───────────────┘
+
+If **input is set to blocking mode** and the input queue fills up, no new messages from ColorCamera will be able to enter the input queue. This means ColorCamera
+will block and wait with sending its messages until it can push a message to the queue of the NeuralNetwork input. If the ColorCamera preview is connected to
+multiple inputs, the same behavior applies, with the messages being pushed sequentially to each input.
+
+.. warning::
+   Depending on pipeline configuration, this can sometimes lead to the pipeline freezing if some blocking input isn't being properly consumed.
+
+If **blocking is disabled**, new messages will push out old messages. This eliminates the risk of pipeline freezing, but can result in dropped messages (e.g. :ref:`ImgFrame`).
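As an editor's illustration of the two overflow behaviors described in the paragraphs this diff adds, here is a host-side sketch using a plain Python deque. This is *not* the DepthAI API — the `push` helper and its `maxsize`/`blocking` parameters are made up for the sketch:

```python
from collections import deque

def push(queue, msg, maxsize, blocking):
    """Sketch of input-queue overflow semantics (illustration only)."""
    if len(queue) < maxsize:
        queue.append(msg)
        return True          # message accepted
    if blocking:
        return False         # producer must wait and retry (may stall the pipeline)
    queue.popleft()          # non-blocking: the oldest message is pushed out...
    queue.append(msg)        # ...and the new one takes its place
    return True

q = deque()
for frame in range(5):
    push(q, frame, maxsize=3, blocking=False)
print(list(q))  # the 3 newest frames: [2, 3, 4]
```

With `blocking=True` the same loop would keep frames 0–2 and reject frames 3–4 until a consumer drains the queue, which is exactly the situation the warning above cautions about.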

 .. rubric:: Node output
 
-Node outputs :ref:`Messages`. There is no output queue per se, but some nodes do have a configurable output message pool.
-Output message pool is a reserved memory region (to reduce memory fragmentation) that holds output messages.
-After the node creates an output message (for example :ref:`ImgFrame`), it will send it to other nodes as specified when linking the inputs/outputs of the node.
+A node outputs :ref:`Messages`. Some nodes have a configurable output message pool. The **output message pool** is a reserved memory region (to reduce memory
+fragmentation) that holds output messages. After the node creates an output message (for example an :ref:`ImgFrame`), it will send it to other nodes as
+specified when linking the inputs/outputs of the node.
 Currently, some nodes (:ref:`VideoEncoder`, :ref:`NeuralNetwork`, :ref:`ImageManip`, :ref:`XLinkIn`) can have the pool size configured.
 The size of the pool specifies how many messages can be created and sent out while other messages are already
-somewhere in the pipeline. When all the messages from pool are sent out and none yet returned, that is when the node will block and
-wait until a message is returned to the pool (not used by any node in the pipeline anymore)
+somewhere in the pipeline.
 
-On the table of contents (left side of the page) all nodes are listed under the :code:`Node` entry. You can click on them to find out more.
+.. warning::
+   When all the messages from the pool are sent out and none have yet been returned, the node will block (freeze) and wait until a message is released (not used by any node in the pipeline anymore).
 
 .. toctree::
    :maxdepth: 0

docs/source/components/nodes/color_camera.rst

Lines changed: 2 additions & 0 deletions

@@ -59,6 +59,8 @@ Click `here <https://en.wikipedia.org/wiki/Image_processor>`__ for more informat
 
 **Image Post-Processing** converts YUV420 planar frames from the **ISP** into :code:`video`/:code:`preview`/:code:`still` frames.
 
+When setting the sensor resolution to 12MP and using :code:`video`, you will get 4K video output. The 4K frames are cropped from the 12MP frames (not downsampled).
+
 Usage
 #####
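The crop in the added note can be checked with a bit of arithmetic. This sketch assumes the common 12MP sensor resolution of 4056×3040 (IMX378-class sensors); the centered-crop offsets are an illustration, not necessarily the exact window the firmware uses:

```python
sensor_w, sensor_h = 4056, 3040   # 12MP resolution (an assumption, IMX378-class)
video_w, video_h = 3840, 2160     # 4K UHD

# Both 4K dimensions fit inside the 12MP frame, so a crop (no scaling) suffices
assert video_w <= sensor_w and video_h <= sensor_h

# A centered crop window inside the 12MP frame
x_off = (sensor_w - video_w) // 2  # 108
y_off = (sensor_h - video_h) // 2  # 440
print(f"4K window at offset ({x_off}, {y_off}) inside the 12MP frame")
```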

Lines changed: 91 additions & 0 deletions

@@ -0,0 +1,91 @@
+SPIIn
+=====
+
+SPIIn node is used for **receiving data** that was sent **from an MCU** (via SPI). `OAK-IOT <https://docs.luxonis.com/projects/hardware/en/latest/#iot-designs>`__ devices
+have an on-board ESP32 that is connected to the VPU (MyriadX) via SPI. You can find demos `here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-spi>`__.
+
+This allows you, for example, to control the :ref:`ColorCamera` or :ref:`ImageManip` from the MCU, or to send a :ref:`Buffer` of data from the MCU to a :ref:`Script` node.
+
+:ref:`SPIOut` is used for sending data from the VPU to an MCU (via SPI).
+
+How to place it
+###############
+
+.. tabs::
+
+  .. code-tab:: py
+
+    pipeline = dai.Pipeline()
+    spi = pipeline.create(dai.node.SPIIn)
+
+  .. code-tab:: c++
+
+    dai::Pipeline pipeline;
+    auto spi = pipeline.create<dai::node::SPIIn>();
+
+
+Inputs and Outputs
+##################
+
+.. code-block::
+
+                ┌─────────────┐
+    SPI         │             │
+   (from MCU)   │             │ out
+   ────────────►│    SPIIn    ├─────────►
+                │             │
+                │             │
+                └─────────────┘
+
+**Message types**
+
+- :code:`out` - :code:`Any`
+
+Usage
+#####
+
+.. tabs::
+
+  .. code-tab:: py
+
+    pipeline = dai.Pipeline()
+    spi = pipeline.create(dai.node.SPIIn)
+
+    spi.setStreamName("control")
+    spi.setBusId(0)
+
+  .. code-tab:: c++
+
+    dai::Pipeline pipeline;
+    auto spi = pipeline.create<dai::node::SPIIn>();
+
+    spi->setStreamName("control");
+    spi->setBusId(0);
+
+Examples of functionality
+#########################
+
+- `SPI demos (host side) <https://github.com/luxonis/depthai-experiments/tree/master/gen2-spi>`__
+- `ESP32 code demos <https://github.com/luxonis/esp32-spi-message-demo>`__
+
+Reference
+#########
+
+.. tabs::
+
+  .. tab:: Python
+
+    .. autoclass:: depthai.node.SPIIn
+      :members:
+      :inherited-members:
+      :noindex:
+
+  .. tab:: C++
+
+    .. doxygenclass:: dai::node::SPIIn
+      :project: depthai-core
+      :members:
+      :private-members:
+      :undoc-members:
+
+.. include:: ../../includes/footer-short.rst

docs/source/components/nodes/spi_out.rst

Lines changed: 3 additions & 2 deletions

@@ -1,9 +1,10 @@
 SPIOut
 ======
 
-SPIOut node is used to send data through to a MCU via SPI. `LUX-ESP32 <https://docs.luxonis.com/en/gen2/pages/products/bw1092/>`__ module has integrated an
-integrated ESP32 connected to the MyriadX via SPI. You can find demos `here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-spi>`__.
+SPIOut node is used for **sending data to an MCU** (via SPI). `OAK-IOT <https://docs.luxonis.com/projects/hardware/en/latest/#iot-designs>`__ devices
+have an on-board ESP32 that is connected to the VPU (MyriadX) via SPI. You can find demos `here <https://github.com/luxonis/depthai-experiments/tree/master/gen2-spi>`__.
 
+:ref:`SPIIn` is used for receiving data from the MCU (via SPI).
 
 How to place it
 ###############

docs/source/install.rst

Lines changed: 1 addition & 1 deletion

@@ -167,7 +167,7 @@ Last step is to edit :code:`.bashrc` with the line:
 
 .. code-block:: bash
 
-    echo "export OPENBLAS_CORETYPE=AMRV8" >> ~/.bashrc
+    echo "export OPENBLAS_CORETYPE=ARMV8" >> ~/.bashrc
 
 
 Navigate to the folder with :code:`depthai` examples folder, run :code:`python install_requirements.py` and then run :code:`python rgb_preview.py`.

docs/source/samples/FeatureTracker/feature_tracker.rst

Lines changed: 1 addition & 1 deletion

@@ -10,7 +10,7 @@ Demo
 .. raw:: html
 
     <div style="position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; height: auto;">
-        <iframe src="https://www.youtube.com/watch?v=0WonOa0xmDY" frameborder="0" allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe>
+        <iframe src="https://www.youtube.com/embed/0WonOa0xmDY" frameborder="0" allowfullscreen style="position: absolute; top: 0; left: 0; width: 100%; height: 100%;"></iframe>
    </div>
 
 Setup
Lines changed: 38 additions & 0 deletions

@@ -0,0 +1,38 @@
+Script forward frames
+=====================
+
+This example shows how to use the :ref:`Script` node to forward (demultiplex) frames to two different outputs - in this case directly to two :ref:`XLinkOut` nodes.
+The script also changes the exposure ratio for each frame, which results in two streams, one lighter and one darker.
+
+Demo
+####
+
+.. image:: https://user-images.githubusercontent.com/18037362/138553268-c2bd3525-c407-4b8e-bd0d-f87f13b8546d.png
+
+Setup
+#####
+
+.. include:: /includes/install_from_pypi.rst
+
+Source code
+###########
+
+.. tabs::
+
+  .. tab:: Python
+
+    Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/Script/script_forward_frames.py>`__
+
+    .. literalinclude:: ../../../../examples/Script/script_forward_frames.py
+      :language: python
+      :linenos:
+
+  .. tab:: C++
+
+    Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/Script/script_forward_frames.cpp>`__
+
+    .. literalinclude:: ../../../../depthai-core/examples/Script/script_forward_frames.cpp
+      :language: cpp
+      :linenos:
+
+.. include:: /includes/footer-short.rst
Lines changed: 49 additions & 0 deletions

@@ -0,0 +1,49 @@
+Disparity encoding
+==================
+
+This example encodes the disparity output of the :ref:`StereoDepth` node. Note that you shouldn't enable subpixel mode, as UINT16
+isn't supported by the :ref:`VideoEncoder`.
+
+Pressing Ctrl+C will stop the recording and then convert it using ffmpeg into an mp4 to make it
+playable. Note that ffmpeg will need to be installed and runnable for the conversion to mp4 to succeed.
+
+Be careful, this example saves the encoded video to your host storage. So if you leave it running,
+you could fill up the storage on your host.
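The ffmpeg conversion mentioned above can be scripted from Python. A minimal sketch, assuming the recording is saved as `disparity.h265` at 30 fps (both are assumptions for illustration — the example's actual file name and framerate may differ):

```python
import subprocess

def to_mp4_cmd(h265_path="disparity.h265", mp4_path="disparity.mp4", fps=30):
    # -c copy rewraps the raw H.265 bitstream into an mp4 container
    # without re-encoding, so the conversion is fast and lossless
    return ["ffmpeg", "-framerate", str(fps), "-i", h265_path, "-c", "copy", mp4_path]

# Requires ffmpeg on PATH; uncomment to actually run the conversion:
# subprocess.run(to_mp4_cmd(), check=True)
print(" ".join(to_mp4_cmd()))
```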
+
+.. rubric:: Similar samples:
+
+- :ref:`RGB Encoding`
+- :ref:`RGB & Mono Encoding`
+
+Demo
+####
+
+.. image:: https://user-images.githubusercontent.com/18037362/138722539-649aef24-266f-4e83-b264-6f80ae896f5b.png
+
+Setup
+#####
+
+.. include:: /includes/install_from_pypi.rst
+
+Source code
+###########
+
+.. tabs::
+
+  .. tab:: Python
+
+    Also `available on GitHub <https://github.com/luxonis/depthai-python/blob/main/examples/VideoEncoder/disparity_encoding.py>`__
+
+    .. literalinclude:: ../../../../examples/VideoEncoder/disparity_encoding.py
+      :language: python
+      :linenos:
+
+  .. tab:: C++
+
+    Also `available on GitHub <https://github.com/luxonis/depthai-core/blob/main/examples/VideoEncoder/disparity_encoding.cpp>`__
+
+    .. literalinclude:: ../../../../depthai-core/examples/VideoEncoder/disparity_encoding.cpp
+      :language: cpp
+      :linenos:
+
+.. include:: /includes/footer-short.rst
