Raspberry Pi likes its Camera
Index To the Series
1. Raspberry Pi likes Open Source
2. Cross-Compiling for Raspberry Pi
3. Cross-Compiling ITK for Raspberry Pi
4. Raspberry Pi likes VTK
5. Raspberry Pi likes Node.js
Continuing our series exploring the use of the Raspberry Pi, here we describe how to use the Pi Camera.
First: The Hardware
We start with our trusty Raspberry Pi board (this is a Model B):
Then we unbox the Raspberry Pi camera, freshly arrived from Adafruit.
The connector of the camera must be inserted into the socket just beneath the Ethernet port,
as indicated by this helpful pencil:
We open the socket by pulling the black cover towards the Ethernet port:
and insert the connector of the camera into it, placing the lead side away from the Ethernet port:
Then close the connector by pushing the black cover away from the Ethernet port:
Now, we look back at the camera module itself. It comes with a protective plastic cover:
which we carefully remove, using its open end:
This is how the setup looks from the side:
and here is a mystical moment when the Raspberry Pi board becomes self-aware by looking at itself:
This completes the Hardware part of the installation.
Second: The Software Configuration
Now it is time to deal with the software configuration and usage.
We log in to the Raspberry Pi and update the packages with:
sudo apt-get update
sudo apt-get upgrade
Once the update / upgrade process finishes, we run the application that reconfigures the board:
sudo raspi-config
This opens the global configuration menu, where we can move the cursor to the “Enable Camera” option:
and select to enable the camera for real:
After pressing ENTER, we are sent back to the top configuration menu, and using the TAB key we can select to Finish the configuration changes:
At that point we are asked whether we want to reboot the board, and we opt for Yes:
The menu terminates and we see the typical shutdown warning message:
Third: The Testing
Once the board has rebooted, we log in again, and we can test the capture of images with the command:
raspistill -o image01.jpg
This results in images like the following:
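raspistill also accepts options for adjusting the capture. As a small sample of its documented flags (run raspistill with no arguments to see the full list), the first command below sets the preview time, resolution, and JPEG quality, and the second flips the image:
raspistill -t 2000 -w 1024 -h 768 -q 85 -o image02.jpg   # 2-second preview, 1024x768, quality 85
raspistill -vf -hf -o image03.jpg                        # vertical and horizontal flip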
Note that the camera has a fixed lens, and its focus starts at about 60 cm away from the camera.
Lens modifications would be needed for taking macro images.
We know, for example, that this can be worked around with a piece of plastic and a drop of water.
A close-up of the water drop:
and with it we can take macro pictures like the following:
Applications
This can easily be converted into a remote snapshot server by following the instructions in this
Raspberry Pi Web Server Python Tutorial,
whose source code is in this GitHub repository:
https://github.com/SUNY-Albany-CCI/Raspberry-Pi-Web-Server-With-Python
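As a minimal sketch of the capture side (the serving side is covered in the tutorial), a shell loop can refresh a snapshot periodically. The output path and interval below are placeholder assumptions, not values taken from the tutorial:
while true; do
    raspistill -t 1000 -o /var/www/snapshot.jpg   # capture one frame after a 1-second warm-up
    sleep 10                                      # wait before taking the next snapshot
done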
Going Beyond
The source code of the Raspberry Pi camera applications can be found at
https://github.com/raspberrypi/userland/tree/master/host_applications/linux/apps/raspicam
and it is configured with CMake.
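For those who want to build these applications themselves, the repository includes a buildme helper script; alternatively, a standard out-of-source CMake build along the following lines should work (the paths here are illustrative):
git clone https://github.com/raspberrypi/userland.git
cd userland
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make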
Comments
Hi,
Thanks for the posts on the Raspberry Pi, nice job.
I am looking to make an “on the fly” (I was about to say in real time…) ITK application with the camera module.
I am not yet looking at reducing the computation time by milliseconds, but I feel that using raspistill to save a file on the SD card and then reading the file with ITK is not the most efficient approach.
Would you know a way to bypass this step (saving the image file and then reading it) and get the image to ITK directly from RAM?
My plan B is to map the file in RAM, but it does not look very elegant.
Thanks.
BrunoD.
BrunoD,
Glad to hear that you liked the post.
You are right in that saving the image in a file is not the most efficient way to pass it to ITK.
I would suggest taking advantage of the fact that RaspiStill is open source:
https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/raspicam/RaspiStill.c
I would think of grabbing this code, compiling it as C++, and inserting into it the ITK pipeline that you want to call.
The writing of the image into a file seems to be in the function “encoder_buffer_callback()” in line 627:
https://github.com/raspberrypi/userland/blob/master/host_applications/linux/apps/raspicam/RaspiStill.c#L627
This could be the right place to intercept the code and populate an ITK image with the buffer, using the ImportImageFilter. Here is an example of how to do this:
http://www.itk.org/Wiki/ITK/Examples/IO/ImportImageFilter
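To make that suggestion concrete, here is a minimal sketch of wrapping a raw pixel buffer in an ITK image with itk::ImportImageFilter. The width, height, and buffer variables are placeholders for whatever the callback actually provides, and the sketch assumes the buffer holds uncompressed 8-bit RGB pixels; a JPEG-encoded buffer would need to be decoded first.

#include "itkImage.h"
#include "itkRGBPixel.h"
#include "itkImportImageFilter.h"

int main()
{
  typedef itk::RGBPixel< unsigned char >          PixelType;
  typedef itk::Image< PixelType, 2 >              ImageType;
  typedef itk::ImportImageFilter< PixelType, 2 >  ImportFilterType;

  // Placeholders: in RaspiStill.c these would come from the camera callback.
  const unsigned int width  = 640;
  const unsigned int height = 480;
  const unsigned long numberOfPixels = width * height;
  PixelType * buffer = new PixelType[ numberOfPixels ];

  ImportFilterType::Pointer importFilter = ImportFilterType::New();

  // Describe the region that the buffer covers.
  ImportFilterType::SizeType size;
  size[0] = width;
  size[1] = height;
  ImportFilterType::IndexType start;
  start.Fill( 0 );
  ImportFilterType::RegionType region;
  region.SetIndex( start );
  region.SetSize( size );
  importFilter->SetRegion( region );

  // Hand the raw memory to ITK; 'false' means the filter will not free it.
  const bool filterWillOwnTheBuffer = false;
  importFilter->SetImportPointer( buffer, numberOfPixels, filterWillOwnTheBuffer );
  importFilter->Update();

  // The output can now feed any ITK pipeline without touching the SD card.
  ImageType::Pointer image = importFilter->GetOutput();

  delete [] buffer;
  return 0;
}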
Please let us know if you run into problems,
or…
if you succeed, please let us know as well 🙂
Hi Luis,
Thanks.
I’ll give it a try (hopefully soon enough, but you know) and I’ll put it online if/when it works (beware, it’s more likely going to be a proof of concept than something that reaches ITK standards. I am more of a Java guy 😉 but I am working on it.)
Cheers.
BrunoD.
Luis,
While waiting to get back to my RPi, which I left elsewhere, I did some research. Here are a few of the things I gathered:
1/ You pointed to the proper location in RaspiStill.c. However, there the image is in JPEG format, the only one that is hardware accelerated (cf. http://www.jvcref.com/files/PI/documentation/raspi-cam-docs.txt, an awful txt file, but search for “--encoding”). Thus it has to be handled as a JPEG file, not a simple C matrix. Can itk::ImportImageFilter do this? I haven’t seen any indication of it.
2/ Regarding the data itself: the “Bayer image” (raw image) can be saved in the metadata of the JPEG file (see the example after this list). But it seems to me that this image is going to be harder to get than the “processed” one (the raw image does not appear in RaspiStill; I believe it is handled by some MMAL function closer to the hardware).
3/ As for the competition 😉 a guy named Pierre posted a tutorial to make it work with OpenCV ( http://thinkrpi.wordpress.com/2013/05/22/opencvpi-cam-step-5-basic-use-display-a-picture/ ). Note: he “only” gets the processed (JPEG) image and did not mention the raw image.
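For reference, the example mentioned in point 2: raspistill documents a --raw option that appends the Bayer data to the JPEG metadata, e.g.
raspistill --raw -o image_with_bayer.jpg   # JPEG with raw Bayer data embedded in the metadata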