Robotic Road Vehicles

Robotic cars such as Google’s have been proven to work on regular streets and roads. The laws of some states (Nevada being the first) are being updated to allow robotic cars to take control.

At this point we can speculate, for fun, about which commercial or utility vehicles might be first in line for robot control, as these seem a prime use case for automation.

One vehicle and job I think could be first is the street sweeper, for several reasons. They operate mostly at night during quiet traffic times, they drive quite slowly, mostly around slower city streets when sweeping, and the job involves overnight shift work for the current operators, which is neither easy nor pleasant. My guess is they don’t involve as much decision making and equipment manipulation as, say, a snow plow would require, but then again I’m not a street sweeper operator, so that may not be the case.

What specialized vehicles or jobs do you foresee robotic vehicle technology moving into?

What’s Immersive Systems all about?

Robots and robotics are well known to be highly entertaining and educational for masses of people, kids and adults alike.

The problem today is that if you want to engage with others in robot play, you are limited to infrequent and expensive tournaments, or to very limited opportunities with local friends.

My vision to solve this problem (and change the world :-) ) is to provide compelling but inexpensive robot games over the Internet to everyone, everywhere, anytime, making it as easy as logging into WoW to play challenging games with friends local and remote, many of whom you will likely never meet in person.

The educational aspect of robot games can be fantastic, covering all areas of STEAM (Science, Technology, Engineering, Arts, Math). Really, what more can you ask for? This covers almost everything great in life, except maybe love. OK, food too. Alright, nature too. Well, it’s still pretty fantastic!

We have all sorts of great ideas for competitive and collaborative robot games where the robots and their environments are real and fantastic, and the virtual world comes into play as well.

A new application for robotic car technology

Did you know that thousands of road traffic control people have been injured when hit by vehicles, hundreds have been killed, and more will be injured and killed every year? It is one of the most hazardous jobs there is. Here are some statistics I found from a five-minute web search:


“More than 70 traffic control persons in BC have been injured in the past five years, many of them seriously. Three of them have been killed in that time. Similar statistics exist across Canada.”


“In Ontario about 50 road construction workers have been killed and some 3,500 injured over the last five years…”

What about taking robotic self-driving car technology like Google is working on, pulling it out of the car, and applying it to a road traffic control robot? I think this would be a simpler case than a self-driving car because the robot doesn’t actually need highly sophisticated navigation for itself. Instead, it can use the scanners, cameras and other sensors used by self-driving cars to understand the traffic and pedestrian flows around it, and use that understanding to control the traffic control signs such as stop, slow, and move ahead with caution.
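As a minimal sketch of the decision loop such a robot might run, here is one way the core sign-selection logic could look. The sensor flags and sign states are hypothetical placeholders, not anything Google or a real traffic control system uses; a real deployment would need far richer sensing and fail-safe behaviour.

```python
from enum import Enum

class Sign(Enum):
    STOP = "STOP"
    SLOW = "SLOW"
    PROCEED = "PROCEED WITH CAUTION"

def choose_sign(pedestrians_in_zone, vehicles_approaching, work_zone_clear):
    """Pick the sign state from simple sensor-derived flags.

    All three inputs are booleans that upstream perception (cameras,
    scanners) would produce. Defaults to the safest state: anything
    unexpected means STOP.
    """
    if pedestrians_in_zone or not work_zone_clear:
        return Sign.STOP
    if vehicles_approaching:
        return Sign.SLOW
    return Sign.PROCEED
```

The key design point is that the robot fails safe: when in doubt (pedestrians present, work zone blocked), it shows STOP rather than guessing.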

The road traffic control market is much smaller than the automobile market, but surely it’s big enough to support a good-sized robotics company, and the impact on lives worldwide would be substantial. The technology is surely more than good enough for this application with some engineering to adapt it to this use case, and I expect the regulatory, legislative and licensing issues will be much, much less of a problem than for building self-driving cars.

Anyone else also think this is a good idea?

Fun with technical trend forecasting

There are a number of really cheap, really small System-on-Chip (SoC) computer boards running Linux or Android coming on the market. These are dropping to the $25 range (for a version of the Raspberry Pi), and while the models available today may not have all the performance desired for running PC applications such as a web browser, it will only take a generation or two of Moore’s Law for them to have all the performance wanted at this price point. These boards have connections to drive displays, accept SD memory cards, support USB peripherals like Wi-Fi dongles, cameras, mice and keyboards, and output audio.

What will the widespread availability of cheap, tiny, powerful, networked computers mean? Well, for one, I think it’s very good news for robotics. So far, one of the cheaper ways to get a decent amount of processing and wireless connectivity onto a robot is to use a standard laptop or netbook, and many robots have adopted this method. A more recent approach has been to leverage smartphones as the wirelessly connected brains of a robot. These are both great approaches but are still in the $300-plus category, which is too much for truly inexpensive and accessible robots, say for children and games. From now on, and over the next few years, these new computer boards will make a fantastic processor brain for small, inexpensive robots, and they will support common Linux and Android development tool chains.

Another idea for such computer boards is to use one as a wireless computer plugged into your TV’s HDMI input jack. It can bring Internet content and natural user interface controls such as voice, gesture and more to the TV without needing any set-top box to take up shelf space.

They will also become cheap enough to embed into many appliances and home systems, supporting Internet connectivity and ushering in the Internet of Things.

I think these are a good sign of what’s to come as general-purpose computers become so small, cheap, low-power and performant that they become ubiquitous and therefore fade into the background of our lives. Eventually we will simply expect most of the objects we interact with to have computational elements and Internet connectivity.

Example small computer boards:

$25-$35 for the Raspberry Pi Model A or B

$49 for the forthcoming APC 8750

~$80 for the MK802

<$100 for the forthcoming Valueplus Tizzbird N1

<$150 for the forthcoming and really interesting Rascal from Rascal Micro

Is the Raspberry Pi an ideal DIY robot processor board?

The Raspberry Pi is receiving more public attention than almost any other computer release I can think of. A lot of the excitement is about the ultra-low cost (only $35 for the more expensive Model B board) and all the great stuff you get for the money.

Other than a mention or two by the Raspberry Pi creators, I haven’t yet seen a discussion of its potential benefits for DIY robotics, and that’s puzzling, as it would seem a great option.

The Raspberry Pi falls into the Single Board Computer (SBC) bucket and is not a traditional microcontroller board. This means you don’t get onboard analog-to-digital converters (ADCs) or a whole lot of general-purpose I/O pins (GPIOs). You do get 8 GPIO pins plus a few dedicated communication interface pins, which should be plenty with an optional expansion board. You also get a 700 MHz ARM processor and all the standard computer connectivity options: Ethernet (on the Model B), two USB 2.0 ports, HDMI for HD video and audio, composite video, an audio jack, and an SD card slot for the Linux OS and application code. All of this is really great for putting a DIY robot together, and much more than the typical microcontroller board offers.

The GPU/graphics capability of the System On Chip (SoC) ARM processor is well beyond that required for most robots, but maybe there’s an avenue here for some cool augmented reality effects on a live video feed from the robot!

Power draw of the board is mentioned as a very respectable 2 watts or less – again a great spec for mobile DIY robots.

What major additions does it need to support a robot?
- It needs to be fed power through a micro-USB cable, and apparently an easy way to do this is to repurpose an inexpensive cell phone charger cable.

- It needs a Wi-Fi USB adapter, of which dozens are available from approximately $14 and up. The supported Tenda USB 11n adapter runs approximately $20.

- It needs a USB camera, of which there are hundreds of options, though autofocus is a desirable feature. The Hercules DualPix Infinite Webcam seems a good low-cost contender at $35, and the Microsoft LifeCam webcams at $35 to $50 are a reasonable choice and are reported to support Linux through the Linux UVC driver.

- It needs a motor control board, and several groups are working on producing them for the Raspberry Pi, including the GertBoard shown here. No price or availability info is up for the GertBoard just yet, but a solution will be available soon.
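Once a motor control board is attached, the Pi-side driving logic can start very simply. Here is a sketch of the standard differential-drive mixing that most two-wheeled robots use; this is generic math, not anything specific to the GertBoard or any particular board.

```python
def differential_drive(speed, turn):
    """Mix a forward speed and a turn rate into left/right motor commands.

    speed and turn are each in [-1.0, 1.0]. The result is a (left, right)
    pair clamped to the same range, ready to be scaled into PWM duty
    cycles for whatever motor control board you attach.
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    return clamp(speed + turn), clamp(speed - turn)
```

For example, full forward with no turn drives both wheels equally, while a pure turn command spins the wheels in opposite directions to rotate in place.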

With your chassis of choice and these few additional parts, you should have a powerful, almost plug-together basic robot for something on the order of $150-$200 plus tax/S&H, plus of course a fully outfitted chassis with motors, gearboxes, wheels/tracks, and a battery. Examples of good ones are the Surveyor SRV1 chassis from osbots ($185) or the Dagu Wild Thumper 4WD All-Terrain chassis ($215), but there are many cheaper options. The total for a robot built this way should be on the order of $350-$400 plus tax and S&H. Not bad for a really solid robot hardware platform.

Raspberry Pi model B


FSR Matrix Sensors for Human-Computer Interaction

In a previous post I talked about a capacitive multi-touch solution that can work well for multi-touch computer control, where you don’t want or need a display as part of the multi-touch input surface.

Another alternative that provides even more expressive control is to use a Force Sensing Resistor (FSR) matrix sensor, which not only gives the x and y position of numerous touch points, but also gives a z parameter for how much force is being applied at each touch point. One company, Sensitronics (owned and run by the inventor of FSR sensors, Frank Eventoff), makes the raw matrix sensor sheet, and they are happy to partner with other companies who will build the rest of the input device electronics, PC drivers and libraries. The materials used for FSR matrix sensors make them opaque, so this technology will work for trackpads, graphics tablets, etc., but not for displays. Sensitronics recently made available larger sensor sheets, moving beyond the smaller sizes (typically laptop trackpad size range) previously available.

However, Sensitronics has recently also been working on breakthrough clear materials, and can be contacted regarding this new innovation. Clear FSR matrix sensor sheet applied to a display will provide a very attractive alternative multi-touch display for many use cases.

Note that FSR sensors can also be produced as sensing strips, rings or individual buttons, however my primary interest has been in multi-touch x-y-z grids for gesture control of computers.
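For the x-y-z grid case, the read-out technique is conceptually simple: energize one row at a time and sample each column’s ADC reading, then threshold the resulting frame to find touch points. Here is a sketch in Python, where `select_row` and `read_column` are hypothetical hooks standing in for whatever drive electronics you build, not part of any vendor’s API.

```python
def scan_fsr_matrix(rows, cols, select_row, read_column):
    """Scan an FSR matrix one row at a time.

    select_row(r) energizes row r; read_column(c) returns the ADC
    reading for column c (0 = no force). Returns a 2D frame of
    force values, frame[row][col].
    """
    frame = []
    for r in range(rows):
        select_row(r)
        frame.append([read_column(c) for c in range(cols)])
    return frame

def touches(frame, threshold=50):
    """Extract (x, y, force) touch points above a force threshold."""
    return [(x, y, v)
            for y, row in enumerate(frame)
            for x, v in enumerate(row)
            if v >= threshold]
```

The z value at each point is what distinguishes this from a plain capacitive grid: it carries how hard the user is pressing, not just where.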

For anyone wanting to use an FSR matrix sensor in their project there is currently a drawback: input devices utilizing the technology are not widely available, and it is a significant amount of engineering work to develop the electronics, PC drivers and libraries to support a usable solution. One company I know of is producing a more complete FSR matrix solution: Sensible UI from Korea. Check out their ArduMT if you want a plug-in development solution for PCs.

Sensible UI - ArduMT multi-touch input controller


Two potential uses for office robots

While not about robot gaming, here are a couple of potential applications for the latest generation of TurtleBots, BilliBots, and other wheeled robots.

I work in a large office where several shared printers at strategic locations serve hundreds of people. One thing that takes time and wastes paper is that you have to walk to the printer to get your print jobs, and often you tell yourself you’ll get it in a minute and then forget to pick it up. Consequently, uncollected print jobs pile up at the printer over time, only to eventually be tossed in the recycle bin. This wastes paper, toner, energy, and wear on the printer.

With mobile wheeled robots really starting to come down in price, representing only a small portion of the cost of one of these large office printers, perhaps we are at the stage where an enterprising company could produce a robot that interfaces with the printers and automatically delivers print jobs to the originator’s desk. For a smaller office, one way to make this easier to manage may be to put a Wi-Fi laser printer on the robot. The robot could connect itself to 120 V wall power at one or more charging stations to power the printer (and charge the robot’s batteries). Then, once it receives a print job, it locates the originator’s desk on a map of the office, drives there, and the person just picks the job out of the paper tray. For large offices that need to keep the large printers, the robot would have to be more sophisticated about paper handling, but this may not be a big problem. One problem to solve would be how to transfer the print job originator’s ID to the robot so it can figure out whose desk to drive to.
Another issue we have is mail distribution. No one wants the job of hand-delivering mail to each staff person’s desk, so no one does it and it piles up in a central location. But many staff forget, or don’t realize they need to look for their mail in the central pile, and old mail builds up.

It should be possible to drop the day’s mail received at reception into a hopper on a robot, which reads the name on each envelope, locates that person’s desk on a map of the office, and auto-delivers the mail around the building.
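The routing step both of these office robots need (look up each recipient’s desk on the office map, then pick a visit order) can be sketched simply. The names, desk coordinates, and greedy nearest-first ordering below are illustrative assumptions, not a real system.

```python
import math

# Hypothetical office map: occupant name -> desk coordinates in metres.
DESKS = {
    "alice": (2.0, 10.0),
    "bob": (14.0, 3.5),
    "carol": (8.0, 8.0),
}

def plan_deliveries(items, desks, start=(0.0, 0.0)):
    """Order deliveries greedily by nearest desk from the current position.

    items: recipient names read off envelopes (or print-job originator IDs).
    Names not found on the map are skipped, for a human to handle.
    Returns the list of names in visit order.
    """
    pending = [name for name in items if name in desks]
    route, pos = [], start
    while pending:
        nearest = min(pending, key=lambda n: math.dist(pos, desks[n]))
        route.append(nearest)
        pos = desks[nearest]
        pending.remove(nearest)
    return route
```

Greedy nearest-first is not optimal for large offices, but it is a reasonable starting point and keeps the robot from criss-crossing the floor at random.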

Good Multi-Touch Input Devices

I do human-computer interaction (HCI) research in the medical imaging industry, and as part of that work I’ve been tracking touch and gesture surfaces for PC control and building prototypes. Since there is a close correspondence between HCI and human-machine interaction (HMI), which is important for robotics, I have a couple of insights to share on recently available, very good quality multi-touch pad input devices, useful in all sorts of domains including robotics and digital music, and for computer users and researchers of all stripes. I’ll describe the first in this post and add the second in the next post.

Hackers, researchers and DIYers have long wanted a good programmable solution for desktop touch and gesture input to PCs. Consumer devices without programmable APIs or SDKs, such as the Wacom Bamboo line of graphics tablets or the Apple Magic Trackpad, have been hacked by brave souls to gain some access to the sensor output through hacked drivers.

These solutions were always a stopgap and not accessible to many, because they were unsupported by the vendors, limited in their access, and brittle to changes. A quiet change by Wacom with their most recent Bamboo graphics tablet line, released Sept 27, 2011, has completely changed this picture. The Bamboo Capture and Create models now boast significantly revamped capacitive multi-touch input that rivals what you are used to on mobile tablet devices like the iPad. The biggest change is that they have also developed and released an SDK to support programmatic access to the touch information coming from these tablets. This makes them ideal for people wanting a custom multi-touch solution.

From using the SDK I’ve discovered that, even better, you can write an application that interprets the touch and gesture input however you want, while at the same time using your choice of the out-of-the-box gestures that come with the standard Bamboo driver, such as pinch zoom, scroll, and full mouse control. They provide a driver configuration panel to enable or disable any of the out-of-the-box touch and gesture support, so it’s easy to create a solution mixing default and custom gestures. Of course, being Wacom graphics tablets, they all support pen as well as touch input. The devices span various sizes and prices, and come with a wireless option. I’ve tested all this and it works great, so I’m hoping this helps spread the news to the Internet communities interested in multi-touch and the natural user interface (NUI).
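To give a flavour of what interpreting raw touch input yourself looks like, here is the generic geometry behind a pinch-zoom gesture, computed from two touch points across consecutive frames. This is plain math on (x, y) coordinates that any touch SDK’s raw point stream lets you do; it is not a reproduction of the Wacom SDK’s actual API.

```python
import math

def pinch_scale(prev_points, cur_points):
    """Return the zoom factor implied by two touch points moving.

    prev_points and cur_points are each a pair of (x, y) tuples for
    the same two fingers on consecutive frames. A result > 1 means
    the fingers spread apart (zoom in); < 1 means they pinched
    together (zoom out).
    """
    d_prev = math.dist(*prev_points)
    d_cur = math.dist(*cur_points)
    if d_prev == 0:
        return 1.0  # degenerate frame: fingers coincident, no scaling
    return d_cur / d_prev
```

In a custom application you would accumulate these per-frame factors into a running zoom level, while leaving gestures you don’t handle to the stock driver.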

Wacom Bamboo Create Tablet



First post – An intro to Immersive Systems

Immersive Systems is being formed to bring together robots and people who want to play with robots.

Robots are poised to become a much bigger part of everyday life. In most cases today they work behind the scenes out of the public eye, such as in factories, warehouses, research labs, and the military. In some cases simple autonomous robots are part of our home lives as educational robots or toys, vacuum cleaners, floor scrubbers, pool and gutter cleaners and lawn mowers.  Robots are about to have a much bigger impact in agriculture, enterprises and in more versatile roles in manufacturing working in close proximity with people.  All told there are many, many millions of robots in active use today, but we expect their numbers and uses to mushroom in the coming decades. Looking at what robotics research labs in academia and industry are working on today it’s easy to extrapolate amazing uses for robots.

For the robotics hobbyist and educational markets, capabilities are mounting, prices are dropping, and there is a rich and growing set of options for all kinds of fun robots to explore and play with. Young people today perceive the possibilities of robots and are naturally drawn to learn about them, as evidenced by all the robot competitions worldwide with hundreds of thousands of participants. Many educators and politicians see robotics as a key discipline for teaching young people the core STEM fields: science, technology, engineering and math. One thing we believe is holding this whole field back is the expense and difficulty today of playing robot games with others, which is a lot more fun than playing with a robot by yourself, as all those tournaments and competitions attest. We plan on improving this situation. Stay tuned for more.