Sunday, April 8, 2012

Wireless Sensor Networks

I've gotten a little bored working with Ubuntu and OpenCV, so I've decided to switch gears back to the wireless sensor network development.

I've discovered a few things during my research that would make a wireless sensor network possible, without my having to develop my own hardware. Perhaps once I have a working prototype, I'll design and build my own hardware, but that's for the future.

For now, I'll share the gadgets that may make the wireless sensor network feasible. In a wireless sensor network, you have Zones. In each zone, you'll have nodes that each do a certain task. You might have some controlling the AC system, a few monitoring the air quality, and maybe one controlling your hot water heater. Nodes can do all sorts of things. Zones can either be assigned a single task, or handle multiple tasks for a single area. It's all about perspective. I haven't quite figured this whole concept out, so bear with me.
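
To pin down the Zone/Node vocabulary above, here's a toy sketch. The class names, node names and task strings are all my own invention for illustration, not a real framework:

```python
# A toy model of the Zones/Nodes idea, just to make the vocabulary
# concrete; all names here are made up for illustration.

class Node:
    """One device doing one task, e.g. an Arduino + XBee."""
    def __init__(self, name, task):
        self.name = name
        self.task = task

class Zone:
    """A group of nodes behind one Zone Control unit (the gateway)."""
    def __init__(self, name):
        self.name = name
        self.nodes = []

    def add(self, node):
        self.nodes.append(node)

    def tasks(self):
        return [n.task for n in self.nodes]

# One zone handling multiple tasks for a single area:
upstairs = Zone("upstairs")
upstairs.add(Node("ac1", "control AC"))
upstairs.add(Node("aq1", "monitor air quality"))
```

Whether a zone means "one task" or "one area" then just becomes a question of how you group the nodes.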

In my Wireless Sensor Network, I would like Zone Control units to have as many wireless features as possible, so that you could work with WiFi, XBee and perhaps Bluetooth at the same time. It would also be convenient to have a fast, strong processor to crunch some data before sending it off to wherever it's going. For regular data, one can just use these units as a wireless gateway, but for images and video, it might be a good idea to compress the data to make sending it faster and easier. This is where the Dragrove comes in. It contains all the features of a wireless gateway, including an ARM processor and embedded Linux. The Dragrove is also Arduino compatible, and has a place for an XBee module as well as an RF module. You can even use the unit for local digital I/O if you want just a wireless gateway without nodes. It has screw terminals on the side for loose wires, and connectors inside for special Dragrove shields.

Lower level units - Nodes - could simply be Arduinos with the control hardware (or sensor packs), plus an XBee for communications. Very simple; no complexity or fancy data processing needed. Even data display units can be nodes. Being a node doesn't necessarily mean that data flows in only one direction; it's possible to receive as well.

To make this system complete, one must have a way to store all this data from the sensor network. Enter Pachube (pronounced PATCH-BAY). Pachube makes this all possible, especially now that their service is FREE. You can have unlimited private feeds at no cost. So your local system can send data over to Pachube, the data can be crunched and handled, and you download your feeds back to your system for analysis. It's very convenient, if you ask me, and it gives your data mobility: you can access your data feeds anywhere in the world and keep an eye on your system. This also takes a lot of work off my hands, as the system is practically ready to go. You just need to set up your feeds on Pachube, set up your data transmission (i.e. uploading sensor data to Pachube), and off you go.
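
For a flavor of what "setting up your data transmission" might look like, here's a rough sketch assuming Pachube's v2 CSV API (a PUT to api.pachube.com/v2/feeds/&lt;id&gt;.csv with an X-PachubeApiKey header). The feed ID, key and datastream names below are placeholders; check Pachube's API docs for the real details:

```python
# Hedged sketch of one feed update via the Pachube v2 CSV API.
# Feed ID, API key and datastream names are placeholders.
import http.client

def build_upload(feed_id, api_key, readings):
    """Build the request path, headers and CSV body for one update.

    readings is a list of (datastream_id, value) pairs."""
    body = "\n".join("%s,%s" % (stream, value) for stream, value in readings)
    headers = {"X-PachubeApiKey": api_key, "Content-Type": "text/csv"}
    return "/v2/feeds/%s.csv" % feed_id, headers, body

def send(feed_id, api_key, readings):
    """Actually push one update (requires network access)."""
    path, headers, body = build_upload(feed_id, api_key, readings)
    conn = http.client.HTTPConnection("api.pachube.com")
    conn.request("PUT", path, body, headers)
    return conn.getresponse().status

# Example: what one upload would contain.
path, headers, body = build_upload("12345", "YOUR_KEY",
                                   [("temperature", 21.5), ("lpg_ppm", 0)])
```

A node (or the Zone Control gateway) would call something like `send()` on a timer, and analysis scripts would pull the feed back down the same way.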

Now I've included links to a few websites I've discovered that might help folks:


Paraimpu is a social tool that may make broadcasting certain data easier.

OpenPicus has Flyport modules, which are stand-alone system-on-module boards with a customizable web server, email client, TCP/UDP/FTP and more. OpenPicus also has Ethernet and WiFi modems for wireless sensor and hardware control networks; there are wide applications for this.

Flukso is a community-based metering application.

Open Energy Monitor has all sorts of things you need to get started on a home energy monitoring tool.

For those of you who have a two-story home and have to wait while hot water comes to you, I have a solution. You no longer have to waste water flushing out the line until hot water arrives. A Hot Water Recirculating Pump is your answer. While they are a little pricey at first, in the long run they will save you money. Water is becoming more and more valuable today, so you really should conserve it as much as possible. This method gets you hot water faster without wasting water (and if you plan right, you won't really waste much electricity either).

Well, that's all I've got for now.

Peace Out.

Wednesday, March 21, 2012

OpenCV Processing

I discovered something that's quite annoying when working with OpenCV and Processing, so I'll share it here. Hopefully you can get it running without searching around.

To get OpenCV working with Processing, just follow the steps on the OpenCV website.

Then use the example given on Sparkfun to get face tracking working. But if you're like me, you'll probably run into the same problem: you try to compile, and Processing throws an error:


Exception in thread "Animation Thread" java.lang.UnsatisfiedLinkError: hypermedia.video.OpenCV.capture(III)V 

This has to do with the following line of code:

opencv.capture( width, height );

Solve this error by copying the following files into your System32 folder.

cv100.dll
cvaux100.dll
cvcam100.dll
cxcore.dll
cxts001.dll


If copying them into the System32 folder doesn't work, copy the files into the WINDOWS folder. That's it!
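
If you'd rather not drag the files around by hand, a throwaway script can do the copy. This is just a convenience sketch: the source directory is whatever your OpenCV install's bin folder is, and you'll need to run it as administrator on Windows:

```python
# Throwaway helper to copy the DLLs listed above into System32.
# src_dir is wherever your OpenCV DLLs live; run as admin on Windows.
import os
import shutil

DLLS = ["cv100.dll", "cvaux100.dll", "cvcam100.dll",
        "cxcore.dll", "cxts001.dll"]

def copy_plan(src_dir, dest_dir=r"C:\Windows\System32"):
    """List the (source, destination) pairs without touching anything."""
    return [(os.path.join(src_dir, name), os.path.join(dest_dir, name))
            for name in DLLS]

def do_copy(src_dir, dest_dir=r"C:\Windows\System32"):
    """Perform the copy (preserving file metadata)."""
    for src, dst in copy_plan(src_dir, dest_dir):
        shutil.copy2(src, dst)
```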

Note: Here's the forum thread where I discovered this solution.


Your example should work without you having to reload Processing. If you don't get any video feed, then the problem is with your camera setup, not with Processing or OpenCV. In the past, I had a Logitech camera work on the fly (that's using Windows 7, x64), but now I'm working with a Microsoft camera that I haven't figured out how to get working. I'll post when I do.

Monday, March 12, 2012

Smart Camera

I think it's about time for an update.

So this last month I was working on building a smart camera that tracks motion using OpenCV, and determines range/speed for threat detection. I got the enclosure design down in Pro-E, so it was just a matter of getting electronics and putting something together.

I had decided that a laser rangefinder was the best way to get the distance/speed measurements that I needed (range of 50-100 ft). My budget for this camera was only a few hundred dollars, so I was in for a surprise: the laser rangefinding component is about $500 on its own. So I thought, maybe I can just build this thing myself and cut out the middleman.

I'll save you the trouble of researching it. There are several things you'll need. First off is an avalanche photodiode. Those go for about 50 bucks, but since they require a few hundred volts to operate, you'll need a power circuit to supply that, and an amplifier circuit to read the data coming off it. That's not exactly easy, or cheap. Second is the laser: you'll need a solid-state hybrid pulse laser that gives you a 14 W pulse every 200 ns (I think). Those run about 50 bucks also, but are a bit hard to acquire, because they are sold in Germany and have a minimum order of a couple thousand. Then you'll need power, amplifier and sync circuits to power, run and synchronize the laser to the photodiode. You'll probably have to do all this on an ARM or an FPGA, since the calculations have to happen very fast for this thing to be accurate. So that's another 80 bucks for a BeagleBone. The custom laser/detector optics will run you about 100-200 bucks easy. So it all adds up to a cool $500. You can save yourself the trouble of custom-making it and order a prototype laser rangefinder from Lightware. So that wraps up the laser rangefinder.
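
The ranging math itself is simple time-of-flight; the timing is what makes the electronics hard. A quick sketch (the numbers are illustrative, not from any particular rangefinder):

```python
# Time-of-flight ranging: the pulse travels out and back, so distance
# is half the round trip at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance for one pulse given the round-trip time in seconds."""
    return C * round_trip_s / 2.0

def timing_resolution_s(range_resolution_m):
    """Timer resolution needed to resolve a given distance step."""
    return 2.0 * range_resolution_m / C
```

A 100 ns round trip is only about 15 m of range, and resolving 15 cm means timing to roughly a nanosecond, which is why a plain microcontroller loop won't cut it and an ARM or FPGA gets involved.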

Next up is the pan/tilt OpenCV camera. It's rather easy to make one. In fact there's a tutorial on Sparkfun to do it, but I ramped up the design a little bit. Instead of cheap analog servos, I decided to use digital servos with metal gears (from Servocity). For the pan/tilt hardware, I decided on Servocity's SPT200. For a camera, I planned to use a cheap $40 Microsoft HD webcam. So all in all, maybe about $150 for the pan/tilt camera.
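
The tracking loop for a camera like this boils down to nudging each servo so the detected target drifts toward frame center. Here's a tiny proportional-control sketch; the gain, angle range and pixel numbers are made-up values, not from the Sparkfun tutorial:

```python
# One proportional step of a pan (or tilt) tracking loop.
# Gain and limits are illustrative, not tuned for any real servo.
def track_step(angle_deg, target_x, frame_w, gain=0.05):
    """Return a new servo angle, clamped to 0-180 degrees.

    target_x is the detected object's pixel column; positive error
    means the target sits right of frame center."""
    error_px = target_x - frame_w / 2.0
    new_angle = angle_deg + gain * error_px
    return max(0.0, min(180.0, new_angle))
```

Run once per frame for pan (on x) and tilt (on y), and the camera settles onto the target; too much gain and it overshoots and oscillates.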

Lastly, I had to get some kind of watertight enclosure built for this thing. Since I had a design drawn in Pro-E, it's simply a matter of getting it all 3D printed somewhere and throwing the hardware in it. That'll probably run me another 200 bucks or so.

So, a cheap, couple-hundred-dollar project turned into a thousand-dollar behemoth. I've decided not to take up this challenge at the moment while looking for a job. It's just not practical.

What I have decided to do in the meantime is get my gas monitoring system working. In fact, I managed to get the MQ-6 LPG/LNG gas alarm working yesterday. The code I used was originally meant for Sandbox Electronics' LPG Gas Sensor Module; however, I changed a bit of it and added my alarm code. Now it triggers an alarm every time the gas level goes higher than a user threshold (that's currently set in the code, but later I'll add an interface to have a user type it in). I know that's rather easy, but setting up the calibration of the sensor isn't, so I had to dig around until I found someone who had done it, to make my life easier.
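
For the curious, the core of that kind of alarm is just two steps: convert the ADC reading into the sensor's resistance Rs, then compare against a calibrated threshold. This is a bare-bones sketch of the idea, not Sandbox Electronics' actual code; the 20k load resistor, 5 V supply and 10-bit ADC are assumptions:

```python
# Bare-bones MQ-6 alarm logic sketch (assumed: 20k load resistor,
# 5 V supply, 10-bit ADC). Not the Sandbox Electronics code.
def sensor_resistance(adc, rl_kohm=20.0, vcc=5.0, adc_max=1023):
    """Rs of the sensor, from the voltage across the load resistor."""
    vout = adc * vcc / adc_max
    return rl_kohm * (vcc - vout) / vout

def gas_alarm(rs_over_r0, threshold_ratio=0.5):
    """Rs/R0 drops as gas concentration rises, so trip when the
    ratio falls below the calibrated threshold."""
    return rs_over_r0 < threshold_ratio
```

The calibration work the post mentions is finding R0 (the sensor's resistance in clean air) so that Rs/R0 can be mapped onto the datasheet's concentration curve.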

I'm working on the MQ-7 CO gas alarm right now, and that's just a variation of the previous alarm. The only difference is that the code has to adjust the power of the sensor heater via PWM to match the specs in the datasheet.
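
As I read the MQ-7 datasheet, the heater cycle is roughly: 5.0 V for 60 s to burn off residue, then 1.4 V for 90 s, taking the CO reading at the end of the low phase (double-check the datasheet before trusting these numbers). A sketch of the schedule, approximating the low voltage as a PWM duty cycle from the 5 V supply:

```python
# MQ-7 heater schedule sketch: high phase to clean the sensor, low
# phase to measure. Timings/voltages are my reading of the datasheet;
# verify before use. Duty approximates average voltage from 5 V PWM.
SUPPLY_V = 5.0

def duty_for(volts):
    """PWM duty cycle approximating a target average heater voltage."""
    return volts / SUPPLY_V

def heater_cycle():
    """One measurement cycle as (pwm_duty, seconds, take_reading)."""
    return [(duty_for(5.0), 60, False),   # burn-off phase
            (duty_for(1.4), 90, True)]    # measure at end of this phase
```

On an Arduino this maps to `analogWrite` values (duty * 255) plus timing, with the ADC read scheduled at the tail of the 1.4 V phase.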



Well, that's all for now. Peace Out.

Saturday, February 11, 2012

Servos & Stuff

Well, work has slowed down quite a bit since I was laid off, and lately I've been spending most of my time just looking for work. Thankfully, I now have all the time in the world to work on my project.

Here's a little update:

I dug up a few old Airtronics servos that I've had for a while, so I fired them up to make sure they work. I used some code freely available online (I'll upload it when I remember the website). Basically, I can move the servo back and forth with the '<' and '>' keys. So I managed to test both servos that I'll be using with the pan/tilt camera. I still have to buy myself a nice HD camera that I can use with all the OpenCV stuff that I'll be doing. So far, all the OpenCV examples I've run have been on images and video files, not with live video from a camera. Once I acquire that camera, I can make a fully automated pan/tilt camera as seen on Sparkfun.
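
The key-handling part of that test is simple enough to sketch from scratch (the code I actually used isn't mine to repost). Pulse widths here are in microseconds, with the typical 1000-2000 us servo range assumed:

```python
# Sketch of '<'/'>' keyboard servo control: each keypress nudges the
# pulse width, clamped to the servo's travel. Step/limits are typical
# hobby-servo values, not Airtronics specs.
def handle_key(pos_us, key, step=10, lo=1000, hi=2000):
    """Return the new pulse width after one keypress."""
    if key == '<':
        pos_us -= step
    elif key == '>':
        pos_us += step          # any other key leaves position alone
    return max(lo, min(hi, pos_us))
```

Hook this to keyboard input and feed the result to the servo library's write call, and you get the same back-and-forth behavior.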

Step two is building a custom laser rangefinder.

That's all I've got for now. Peace out.

Sunday, January 22, 2012

OpenCV

I'm going to step through a simple example with the OpenCV 2.3.1 Mega Pack on Visual Studio 2010 and Windows 7 x64. I got the runaround from searching the web for the past couple of days, so I'm sure everyone will benefit from this solution.

I followed several tutorials step by step, with some changes, and it finally works. You must make the changes to get OpenCV 2.3.1 working. I'll walk you through the whole tutorial, to guarantee you don't miss anything.

Step 1: Install OpenCV 2.3.1

Go to the SourceForge page of OpenCV and install the OpenCV 2.3.1 Mega Pack for Windows. Install the package to 'C:\OpenCV2.3.1'. If you look in the directory, you'll see a folder called build. We need to use that to compile OpenCV.

Step 2: Compile Everything

Before you start working with Visual Studio, you need to compile the code into a VS project. You can do that using CMake.

Once you install CMake, start up the command prompt (Win+R, type cmd and press Enter). Type cmake and you should see its usage output on screen.

Next, type in these lines:
cd C:\OpenCV2.3.1
mkdir vs2010_build
cd vs2010_build
cmake -D CMAKE_BUILD_TYPE=RELEASE C:\OpenCV2.3.1
You'll see a lot of output, and after a while the process will complete and you'll have a Visual Studio project for OpenCV.

Step 3: Compile the project
This one is simple. Double-click the OpenCV project and compile it in Visual Studio 2010 (or Express). It'll take a while; compiling the entire library plus the samples takes a long time.


Step 4: Sample Project
Then start a new instance of Microsoft Visual Studio 2010 (or Express).
  • File -> New -> Project
  • Name: 'C:\Tutorials\OpenCV\OpenCV_Hello'...'OK'...'Finish'
  • Use the code below:
// OpenCV_Hello.cpp :
// Microsoft Visual C++ 2010 Express and OpenCV 2.3.1

#include "stdafx.h"

#include "cv.h"
#include "highgui.h"

int _tmain(int argc, _TCHAR* argv[])
{
        // Load the test image (place lenna_small.png next to the .exe)
        IplImage *img = cvLoadImage("lenna_small.png");
        if (!img)
                return -1;      // image not found

        cvNamedWindow("Image:", 1);
        cvShowImage("Image:", img);

        cvWaitKey();            // wait for any keypress
        cvDestroyWindow("Image:");
        cvReleaseImage(&img);

        return 0;
}
Now comes the important part. Pay attention to this, or you'll be dealing with the same errors I got.

Step 5: Configure Project Directories
  • In VS 2010, Go to 
    • Project -> OpenCV_Hello Properties... -> Configuration Properties -> VC++ Directories
  •  Executable Directories -> use the drop down menu -> ... add: 'C:\OpenCV2.3.1\vs2010_build\bin\Debug' 
  • Include Directories -> use the drop down menu -> ... add: 'C:\OpenCV2.3.1\include\opencv;' 
  • Library Directories -> use the drop down menu -> ... add: 'C:\OpenCV2.3.1\vs2010_build\lib\Debug' 
  • Source Directories -> use the drop down menu -> ... add: 'C:\OpenCV2.3.1\include\opencv;' 
  • Linker -> Input -> Additional Dependencies... [Use the top left drop-down menu to find each build, i.e. where it says Active (Debug)]
    • For Debug Builds -> use the drop down menu -> ... add: 'opencv_core231.lib; opencv_ml231.lib; opencv_highgui231.lib' 

    • For Release Builds -> use the drop down menu -> ... add: 'opencv_core231.lib; opencv_ml231.lib; opencv_highgui231.lib' 

    • Ignore all Default Libraries -> 'No'  


  • Linker -> General -> Additional Library Directories -> use the drop down menu -> ... add: 'C:\OpenCV2.3.1\include\opencv;C:\OpenCV2.3.1\build\x64\vc10\bin;'

Once you have these configurations, there is one last thing you need to do. This is a critical step, or the program won't compile.

  • Go to Build -> Configuration Manager -> Platform -> use the drop down menu to select x64 instead of x86.
  • Now build the project, and it should compile pretty quickly. Then just run the project via the Debug menu. And that's it! You should get an image box with your sample image.

In my search online for OpenCV tutorials and code, I've come across quite a lot of previous work. An image crop of Lenna for OpenCV purposes can be found here. The following websites were directly used (including images and code) for this tutorial:

  1. http://www.aishack.in/2010/02/hello-world-with-images/
  2. http://www.aishack.in/2010/03/installing-and-configuring-opencv-2-0-on-windows
  3. http://opencv.willowgarage.com/wiki/VisualC%2B%2B_VS2010
  4. http://stackoverflow.com/questions/7011238/opencv-2-3-c-visual-studio-2010
  5. http://abdullahakay.blogspot.com/2011/08/opencv-23-linker-error-with-vs2010.html
  6. http://siddhantahuja.wordpress.com/2011/07/18/getting-started-with-opencv-2-3-in-microsoft-visual-studio-2010-in-windows-7-64-bit/
If you have no idea what OpenCV is or how to use it, I found a pretty nifty set of tutorials here to get you started on Windows.

If you want to run OpenCV using Python on Windows, this blog will show you the way. If you want to run OpenCV using Processing and Javascript, this blog will help; this is useful for running OpenCV on the Beagle Board xM and the BeagleBone. The OpenCV Java library is here.

One of the most useful tutorials that I've found thus far has got to be the one by Sparkfun. It's not just an OpenCV tutorial; it has a motion-tracking pan/tilt camera with free code!

I know these sites were very beneficial for my OpenCV education and projects, and I hope they help you too.

Wednesday, January 18, 2012

Angstrom, Cloud 9, Oh My!

Well, after a lot of troubleshooting with Windows, I finally managed to get the BeagleBone recognized on my Windows 7 computer (Vista won't see it at all, no matter what I do). So as a storage device, it works. Next I tried to access it over serial via Putty, and that didn't seem to be working. Windows couldn't find the COM port! So without the COM port, I couldn't access the Cloud9 IDE to do Python development and try the sample code that turns an LED on and off.

Fortunately, I found the solution. Since the drivers you install are not signed, you have to reboot your system; then, before Windows boots (but after your BIOS screen shows), hit F8 a few times and you'll load into the Windows boot options. Select 'Disable Driver Signature Enforcement' and continue. Once Windows loads up (and you log in), the COM port should be visible in your device manager. A screenshot of the F8 screen is shown below (Vista and Win 7 have the same screen).



Picture acquired here.
Find out what COM port number it is, then use Putty to log in to the BeagleBone, using the standard settings given on the BeagleBone site: '115200' bits per second, '8' data bits, 'None' parity, '1' stop bits, and 'none' flow control. That gets you into the Angstrom Linux shell. It takes a few minutes, but once it loads, log in as root, and bam, you're in.
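
If you'd rather script the console than use Putty, the same settings map straight onto pyserial. This assumes pyserial is installed and that your board shows up as e.g. COM4 (use whatever Device Manager actually shows):

```python
# The Putty settings above (115200 8-N-1, no flow control) expressed
# for pyserial. The COM port name is a placeholder; check Device Manager.
SETTINGS = dict(baudrate=115200, bytesize=8, parity="N",
                stopbits=1, xonxoff=False, rtscts=False)

def open_console(port="COM4"):
    """Open the BeagleBone serial console (requires pyserial)."""
    import serial  # pyserial, imported lazily so the sketch loads without it
    return serial.Serial(port, timeout=1, **SETTINGS)
```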

One thing I noticed was that once I logged in as root, I could visit all the links that the BeagleBone instructions talk about:

BeagleBone 101 presentation - http://192.168.7.2

This application is largely self explanatory. The source can be edited using the Cloud9 IDE. The application is 'bone101.js'.

GateOne - https://192.168.7.2

For documentation, please visit the on-line GateOne Documentation. Note: This installation might be a bit slow, but we are actively working on improving this with the author. 

Cloud9 IDE - http://192.168.7.2:3000

This development environment supports direct execution of JavaScript via Node.JS. Visit nodejs.org for information on programming in Node.JS. The IDE is pre-populated with the source and demos of the BoneScript project.

So, once you have access to the Cloud9 IDE, all you have to do is load the sample Blink.py program and run it. You should see one of the LEDs on the board start blinking. I haven't figured out which pin the other LED is on, but I'm quite sure I'm supposed to supply that LED myself. I'll post when I've got it figured out.
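
For anyone who can't dig up Blink.py, here's a guess at what a minimal version boils down to on an Angstrom-era image: toggle an on-board LED through sysfs. The LED path is my assumption for the stock image (look under /sys/class/leds/ on your board for the real names), and writing it needs root:

```python
# A guessed-at minimal Blink.py: toggle an on-board user LED via sysfs.
# The LED path is an assumption; list /sys/class/leds/ on your board.
import time

LED = "/sys/class/leds/beaglebone::usr0/brightness"

def blink_states(n):
    """Alternating '1'/'0' brightness values for n half-periods."""
    return ["1" if i % 2 == 0 else "0" for i in range(n)]

def blink(n=10, delay=0.5, path=LED):
    """Blink the LED n/2 times (needs root on the board)."""
    for state in blink_states(n):
        with open(path, "w") as f:
            f.write(state)
        time.sleep(delay)
```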

I'll probably post a couple more tutorials on my progress with this board, as there is almost nothing on the internet to help anyone along (videos aren't the same as instructional tutorials). So if anyone has any questions for me, please ask away, and I'll try to answer as best I can.


Edit: If for some reason you have to turn off the computer, remember F8 on startup; then, once you're logged into Windows, restart the BeagleBone by hitting the little reset button on the board. Give it some time (about 5 minutes) to reboot, and it should be good to go. You'll have to eject the BeagleBone drive again if you want to start development in the Cloud9 IDE or on Angstrom.

Thursday, January 12, 2012

My BeagleBone Arrives!

Well, my BeagleBone just arrived today, and I'm quite excited to begin working with it. (Photo obtained here.)


This development platform is built around a Texas Instruments processor, and offers a whole host of features. Quote from the Adafruit website:
At over 1.5 billion Dhrystone operations per second and vector floating point arithmetic operations, the BeagleBone is capable of not just interfacing to all of your robotics motor drivers, location or pressure sensors and 2D or 3D cameras, but also running OpenCV, OpenNI and other image collection and analysis software to recognize the objects around your robot and the gestures you might make to control it. Through HDMI, VGA or LCD expansion boards, it is capable of decoding and displaying multiple video formats utilizing a completely open source software stack and synchronizing playback over Ethernet or USB with other BeagleBoards to create massive video walls. If what you are into is building 3D printers, then the BeagleBone has the extensive PWM capabilities, the on-chip Ethernet and the 3D rendering and manipulation capabilities all help you eliminate both your underpowered microcontroller-based controller board as well as that PC from your basement.
  • Board size: 3.4″ x 2.1″
  • Shipped with 2GB microSD card with the Angstrom Distribution with node.js and Cloud9 IDE
  • Single cable development environment with built-in FTDI-based serial/JTAG and on-board hub to give the same cable simultaneous access to a USB device port on the target processor
  • Industry standard 3.3V I/Os on the expansion headers with easy-to-use 0.1″ spacing
  • On-chip Ethernet, not off of USB
  • 256MB of DDR2
  • 700-MHz super-scalar ARM Cortex™-A8
  • Easier to clone thanks to larger pitch on BGA devices (0.8mm vs. 0.4mm), no package-on-package memories, standard DDR2 vs. LPDDR, integrated USB PHYs and more.
So, this will be the development board for my OpenCV Smart Security System. I found a whole host of projects out there dealing with the Beagle Board, but only one dealing with security systems. However, mine adds facial recognition, motion tracking, camera tracking, and a whole list of other features.


As far as my RFID work is concerned, I've made slow progress. However, I have experimented with some examples, and now have an idea of how the RedBee functions. I'll have to write some code to see if I can run it independently of the computer (RedBee-to-Arduino communications). That way I'll have an idea of how to integrate this into my project.