Set up your working environment
```shell
apt install ffmpeg   # required to visualize the video stream
apt install evince   # required to visualize PostScript files

# mkdir linux && cd linux   # (if required)

# Run docker
sudo docker run -i -t --volume $(realpath linux):/linux -w /linux helenfornazier/slim-v4l:firsttry

# git clone git://linuxtv.org/media_tree.git   # (if required)

# We need to add the media git tree
cd staging
git remote add media git://linuxtv.org/media_tree.git
git fetch media
git checkout -b media-master media/master

# virtme-configkernel --defconfig   # (if required)

# Enable the virtual drivers vivid and vimc
make menuconfig
# Device Drivers -> Multimedia support -> Cameras/video grabbers support
# Device Drivers -> Multimedia support -> Media Controller API
# Device Drivers -> Multimedia support -> V4L2 sub-device userspace API
# Device Drivers -> Multimedia support -> Media test drivers -> Virtual Video Test Driver
# Device Drivers -> Multimedia support -> Media test drivers -> Virtual Media Controller Driver (VIMC)

# Compile and install the modules
make -j8
make modules_install

# Run the kernel with virtme
# NOTE: replace the kernel version with yours, or get it from `make kernelrelease`
virtme-run --rwdir /linux --kdir=. --mdir=/lib/modules/5.3.0-rc4+/
```
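As a quick sanity check (a sketch, assuming you are in the kernel build directory), you can confirm that the two virtual drivers were actually enabled before building:

```shell
# Check that vivid and vimc ended up enabled in the kernel .config.
# CONFIG_VIDEO_VIVID / CONFIG_VIDEO_VIMC are the Kconfig symbols behind
# the menuconfig entries above.
if [ -f .config ]; then
  found=$(grep -E 'CONFIG_VIDEO_(VIVID|VIMC)=' .config || true)
else
  found="no .config in $(pwd); run this from the kernel build directory"
fi
echo "$found"
```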
Test modules and stream
```shell
modprobe vivid
ls /dev/video*
v4l2-ctl -d /dev/video0 -v pixelformat=YUYV,width=640,height=360
v4l2-ctl --stream-mmap --stream-count=30 -d /dev/video0 --stream-to=test.raw

# On your host computer (outside virtme, outside docker) execute:
# NOTE: you don't need to exit virtme or docker
# NOTE: the ffmpeg package is required
ffplay -f rawvideo -pixel_format yuyv422 -video_size "640x360" test.raw

rmmod vivid

modprobe vimc
media-ctl -d /dev/media0 -V '"Sensor A":0[fmt:SBGGR8_1X8/640x480]'
media-ctl -d /dev/media0 -V '"Debayer A":0[fmt:SBGGR8_1X8/640x480]'
media-ctl -d /dev/media0 -V '"Sensor B":0[fmt:SBGGR8_1X8/640x480]'
media-ctl -d /dev/media0 -V '"Debayer B":0[fmt:SBGGR8_1X8/640x480]'
v4l2-ctl -d /dev/video2 -v width=1920,height=1440
v4l2-ctl -d /dev/video0 -v pixelformat=BA81
v4l2-ctl -d /dev/video1 -v pixelformat=BA81
v4l2-ctl --stream-mmap --stream-count=10 -d /dev/video2 --stream-to=test.raw

# On your host computer:
ffplay -loglevel warning -v info -f rawvideo -pixel_format rgb24 -video_size "1920x1440" test.raw

rmmod vimc
```
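A quick way to sanity-check the capture above: YUYV packs 2 bytes per pixel, so the raw file size is fully determined by the resolution and frame count. A small sketch:

```shell
# Expected size of test.raw for the vivid capture above:
# 640x360 pixels, 2 bytes/pixel (YUYV), 30 frames.
width=640; height=360; bytes_per_pixel=2; frames=30
expected=$((width * height * bytes_per_pixel * frames))
echo "expected size of test.raw: ${expected} bytes"
# Compare with the actual file inside the guest: stat -c %s test.raw
```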
If you compile v4l-utils yourself, you should be able to address the nodes by name, for example:
v4l2-ctl -z platform:vimc -d "RGB/YUV Capture" -v width=1920,height=1440
v4l2-ctl --help is your friend, and interpreting docs and helpers is a good skill to develop.
1) Use v4l2-ctl to read the current format, i.e. the resolution of the image.
Tip: the /dev/videoX nodes we are using to capture images are called capture devices. Use the command
v4l2-ctl --help-vidcap to see how you can manipulate this type of node.
2) Use v4l2-ctl to change the resolution, then use the above command to capture a raw video at this new resolution and try to visualize it with ffplay.
3) Use v4l2-ctl to list the supported pixel formats (i.e. how the pixels are laid out in memory), change the pixel format, and generate another video. Execute ffplay -pix_fmts to list the pixel formats ffplay supports, and try to visualize this new video with ffplay.
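Changing the pixel format changes how many bytes each pixel occupies, and therefore the raw frame size ffplay has to assume. For instance, YUYV uses 2 bytes per pixel while RGB3 (rgb24) uses 3:

```shell
# Frame sizes at 640x360 for two common pixel formats.
width=640; height=360
yuyv_frame=$((width * height * 2))   # YUYV: 2 bytes per pixel
rgb3_frame=$((width * height * 3))   # RGB3 (rgb24): 3 bytes per pixel
echo "YUYV frame: ${yuyv_frame} bytes; RGB3 frame: ${rgb3_frame} bytes"
```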
4) Use v4l2-ctl -d0 -l to list which controls the device provides and find the brightness. Now use this information to get the current brightness value.
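If you get stuck, here is a minimal sketch using v4l2-ctl's control options (guarded so it only runs where /dev/video0 exists; it assumes vivid is loaded):

```shell
# Read the current brightness control via v4l2-ctl.
if [ -c /dev/video0 ]; then
  note=$(v4l2-ctl -d /dev/video0 --get-ctrl brightness)
else
  note="no /dev/video0 here; run this inside the virtme guest with vivid loaded"
fi
echo "$note"
```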
5) Change the brightness to 240 and inspect the output video
6) Change the test_pattern control to 3, what do you see? Test other numbers
7) Change the resolution to a really big value and read the resolution again. What happens?
8) Do the same thing, but this time use strace and search for the S_FMT ioctl. Compare with the docs for S_FMT (try finding the docs on linuxtv.org; navigating the docs is a good skill to develop).
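One possible way to watch the ioctl traffic is the sketch below (it requires strace inside the guest, and only runs where /dev/video0 exists):

```shell
# Trace the ioctls v4l2-ctl issues while setting an oversized resolution,
# keeping only the S_FMT-related lines.
if [ -c /dev/video0 ] && command -v strace >/dev/null; then
  out=$(strace -e trace=ioctl v4l2-ctl -d /dev/video0 -v width=99999,height=99999 2>&1 | grep S_FMT || true)
else
  out="needs /dev/video0 and strace; run inside the virtme guest"
fi
echo "$out"
```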
9) Use v4l2-ctl to list the possible framesizes for the current pixelformat.
10) Test the driver using
v4l2-compliance -d /dev/video0.
You should see something like
Total: 107, Succeeded: 103, Failed: 4, Warnings: 0
11) Use v4l2-compliance to test streaming (check v4l2-compliance --help for the streaming test option).
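In recent v4l2-compliance versions the streaming tests are behind the -s option (if your build differs, check v4l2-compliance --help). A guarded sketch:

```shell
# Run v4l2-compliance with streaming tests enabled and show the summary line.
if [ -c /dev/video0 ] && command -v v4l2-compliance >/dev/null; then
  result=$(v4l2-compliance -d /dev/video0 -s 2>&1 | tail -n 1)
else
  result="needs /dev/video0 and v4l2-compliance; run inside the virtme guest"
fi
echo "$result"
```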
1) Let's try to understand what a topology is. Execute:
```shell
media-ctl --print-dot > topology.dot
dot -Tps -o topology.ps topology.dot
```
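If you want to see what the .dot -> .ps pipeline does without a /dev/media0 around, here is a hypothetical miniature of the graph (the real media-ctl output is much richer):

```shell
# Write a toy graph in dot syntax and render it if graphviz is installed.
cat > topology.dot <<'EOF'
digraph board {
  "Sensor A" -> "Debayer A"
  "Debayer A" -> "Scaler"
}
EOF
if command -v dot >/dev/null; then
  dot -Tps -o topology.ps topology.dot
fi
echo "wrote topology.dot"
```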
Note: you should see green blocks; if you only see yellow blocks, you are running vivid instead of vimc.
2) Try to understand these blocks by following the docs below. What does the sensor do? What is a debayer?
3) The yellow boxes are video devices and the green boxes are sub-devices. What is the main difference between them?
4) What are a pad, an entity and a link? And what is the difference between a sink pad and a source pad?
See the media-controller docs, and check the media model.
5) Use the v4l-utils or media-ctl tools (your choice; they provide some overlapping functionality) to query the image format output by "Sensor A".
Tip for v4l-utils: v4l2-ctl --help-subdev is your friend.
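A guarded sketch of one way to do it with v4l2-ctl; the /dev/v4l-subdev0 node name is an assumption, list yours with `ls /dev/v4l-subdev*`:

```shell
# Query the format configured on pad 0 of a sub-device node.
if [ -c /dev/v4l-subdev0 ]; then
  fmt=$(v4l2-ctl -d /dev/v4l-subdev0 --get-subdev-fmt pad=0 2>&1)
else
  fmt="no /dev/v4l-subdev0 here; run inside the guest with vimc loaded"
fi
echo "$fmt"
```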
6) Now we are going to "cheat" and use media-ctl -p to see the formats of all pads.
7) Change the format in "Sensor A" to 300x300 and try to start streaming. What happens, and why?
8) Adjust all the formats in the topology so that it works with "Sensor A" outputting an image with a resolution of 300x300.
9) Use media-ctl to disable the link "Debayer A"->"Scaler" and enable "Debayer B"->"Scaler"
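The media-ctl link syntax is '"Entity":pad->"Entity":pad[flag]', with [0] disabling and [1] enabling the link. A guarded sketch; the pad numbers below are assumptions, confirm them against media-ctl -p:

```shell
# Toggle the two debayer->scaler links in the default vimc topology.
if [ -e /dev/media0 ]; then
  media-ctl -d /dev/media0 -l '"Debayer A":1->"Scaler":0[0]'   # disable
  media-ctl -d /dev/media0 -l '"Debayer B":1->"Scaler":0[1]'   # enable
  status="links updated"
else
  status="no /dev/media0 here; run inside the guest with vimc loaded"
fi
echo "$status"
```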
10) Stream an image from "Raw Capture 0". What is the difference?
11) Test the driver using
v4l2-compliance -m /dev/media0.
Some other tools
- Set up cscope and ctags to navigate the code faster:
make tags cscope
Make sure you can easily find a file or a definition of a function.
- Install libcamera
You should be able to stream using the cam command. Example:
cam -c "ov5647 4-0036" -C --file="/tmp/libcamframe#.data" -s width=1280,height=960
- Apply patches using git pw
```shell
linux> git pw series list "Collapse vimc into single monolithic driver"
+--------+-------------+---------------------------------------------+---------+---------------------------------------------+
| ID     | Date        | Name                                        | Version | Submitter                                   |
|--------+-------------+---------------------------------------------+---------+---------------------------------------------|
| 162623 | 10 days ago | Collapse vimc into single monolithic driver | 3       | Shuah Khan (firstname.lastname@example.org) |
| 160475 | 15 days ago | Collapse vimc into single monolithic driver | 2       | Shuah Khan (email@example.com)              |
| 158045 | 21 days ago | Collapse vimc into single monolithic driver | 1       | Shuah Khan (firstname.lastname@example.org) |
+--------+-------------+---------------------------------------------+---------+---------------------------------------------+

linux> git pw series apply 162623
```
Playing with vimc code
Make sure you have cscope and ctags set up to make it easier to navigate the code and find the referred functions.
Tip: to go to a function or struct definition in vim, type :cs f g func_or_struct_name; to return to the previous location, type Ctrl-O.

1) In vimc-core.c, modify the list ent_links to create a simple topology with only two entities: sensor->capture. Then check the new topology with media-ctl --print-dot > file.dot && dot -Tps -o file.ps file.dot (then open file.ps with evince), or just media-ctl -p if you don't want to generate the visual graph.
2) Find where struct vimc_cap_ioctl_ops is defined, guess which function is called when the format is queried, and add a dump_stack() there. Compile, then use v4l2-ctl to read the image format from any capture node and check the printed message.
3) Modify this function to return -EINVAL, execute the ioctl VIDIOC_G_FMT from userspace and run v4l2-compliance, and see what happens.
4) When the user calls the ioctl VIDIOC_SUBDEV_S_FMT, ignore the value set by the user and just set a fixed value of your choice. Try changing the resolution from userspace and check with strace what happens.
5) In vimc_cap_process_frame(), there is a memcpy() that copies the frame from the kernel buffer to the userspace buffer. Manipulate the frame to change the output image. Suggestions: add random noise, sum a value, or set the pixels to a fixed number to generate a static color.
6) In the capture, refuse to start streaming by returning -EPIPE and see what happens in userspace. Now move this -EPIPE to a function called vimc_link_validate() (when is this function called? A dump_stack() there can tell you).

7) In vimc_cap_process_frame(), comment out the line that marks the buffer as done. See what happens in userspace when streaming (check with strace).
8) Find the function called when VIDIOC_SUBDEV_S_FMT is issued on the sensor pad, and make it ignore the values that userspace is trying to set, without returning an error.
9) For a subdevice, a media bus code (mbus code) is almost equivalent to a pixelformat. The main difference: in real hardware, the mbus code configures the order in which the bytes are transmitted on its internal bus, while the pixelformat indicates how the pixels are arranged in memory in the final image frame.
You can enumerate the media bus codes from a subdevice using v4l2-ctl --list-subdev-mbus-codes (see v4l2-ctl --help-subdev). Enumerate the mbus codes from sensor pad 0. Now try to find in the code which function is responsible for that, and make it enumerate just one mbus code instead of several.
10) Find the function vimc_sen_process_frame() and comment out the call to tpg_fill_plane_buffer(). What happens to the stream? What does tpg stand for? (Tip: go to the tpg function definition, see which file it is in, and read the Kbuild of this tpg driver.)
11) Find the default scaling factor and change it to 2.
12) There is a thread, initialized from the capture, that processes frames. Find where this thread is and, instead of generating frames at 60 Hz, make it generate them at 10 Hz.
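As a quick reference, the per-frame period the thread has to honor at each rate:

```shell
# Frame periods: 60 Hz -> ~16 ms between frames, 10 Hz -> 100 ms.
for hz in 60 10; do
  echo "${hz} Hz -> $((1000 / hz)) ms per frame"
done
```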