Tuesday, August 16, 2016

Improving the state of security in the Internet of Things

I've spent the past few months as the Solutions Architect at resin.io, working to improve the security and functionality update process for Internet of Things (IoT) devices. As I've been doing this, I've been learning an enormous amount about the current state of the industry.

It's pretty clear that right now managing and updating IoT devices sucks. There are countless examples of devices being left vulnerable, exposed to the world, abandoned or even intentionally destroyed by their producers because they were too hard to keep updated.

There are a few factors that I see as contributing to this problem:
  • a need for Free/Open tools for securing devices
  • a need for Free/Open tools for updating and managing devices
  • a need for education within the industry

Some more cynical people might replace "education" with "an attitude adjustment" in this list, but I really do think that most of the blame can be placed on ignorance rather than malice here. The tools are important, even crucial, so I'll expand on those points in the future. But even the best tools are worthless without the understanding of why they're so necessary.

A hardware mindset

Most people and companies involved with hardware manufacturing have what I call a "hardware mindset": the idea that a product is designed and manufactured exactly once and then identical units are distributed to the marketplace. This is understandable and a completely legitimate way of thinking when you're creating self-contained devices. If you've been building widgets for decades (or even longer!) then you've gotten very good at this sort of process. You spend a lot of time up front on design, getting it as perfect as you can, but then once you're done with that you start up the factory and move on to the next thing. As long as your widgets aren't catching on fire or falling to pieces under normal use, you don't really have a problem.

The problems do come in when you start talking about connected devices.  When you were making widgets that came out of the factory perfectly safe, the only way they became dangerous was if someone intentionally opened them up and tinkered with their parts.  And if someone gets hurt when they modify your product, that's really on them.  But once something is connected to nearby devices or the Internet, it can be opened up and modified by someone who might be thousands of miles away, usually with no external sign of tampering.  Even without an actual attacker, it's simply impossible to test every permutation of interactions between your networked widget and all the other devices in the world.  Your customers can be hurt by your product through no fault of their own!  Suddenly you need a way to fix these issues as they arise.  You must update your devices in the field to protect your customers.

Simply put, once the device comes out of the factory, you have to stop thinking like a hardware company and start thinking like a software company.

Moving to a software mindset

Software companies work very differently than hardware companies. There was a brief period where the idea was the same -- write some bytes to a disk or CD and ship them out and you're done -- but those days have been gone for decades. Now the goal isn't to produce a single piece of static software but to have a system that allows constant updating and refinement of the product. There can be no assumption that any program is ever perfect or complete or secure. The only way to protect yourself and your customer is to make the process of updating and fixing the software so fast and foolproof that keeping it updated becomes a matter of course.

IoT and hardware companies must begin thinking about their products primarily as software that happens to have a hardware component. This is the only way that the state of security in the Internet of Things will improve.

Sunday, May 15, 2016

How to fix a USB audio device that interferes with mouse clicks in Linux

(This post includes an overview of how to solve problems similar to the one I encountered.  If you're looking for just the solution, skip to "The solution" below.)

I recently got a Jabra USB headset.  It's a very nice headset with active noise cancelling, a decent microphone, and a USB audio controller that includes a few buttons to control the audio volume.  These buttons work directly with the OS by presenting as a USB keyboard and sending the same keystrokes that the volume up/down buttons on a real keyboard would.

Unfortunately, it also presents itself as a USB mouse with 12 buttons and takes precedence over any actual pointing devices attached to the system.  Why a USB audio device needs to present itself to the system as a mouse is beyond me.  (The only theory I've heard that makes any sense at all is that there's some Windows-specific driver that would intercept mouse events and do some sort of deeper OS integration.)  Whatever the reason, it's a terrible "feature" and an abuse of the USB HID specification.

But rant over.  Fortunately, because X (the graphics engine of desktop Linux systems) gives the user fine-grained control over input devices, this is pretty easy to fix.  I'm including the steps I used to troubleshoot and fix this so that if anyone runs into similar issues in the future they know what to do.

Narrowing the issue down to X

As soon as I started using this headset I noticed something strange: I couldn't click the mouse anywhere outside of the window that I had on top when I plugged it in.  This persisted until I unplugged the USB cable, when suddenly mouse clicks worked again.  When I plugged the USB cable back in, mouse clicks stopped working again.  Weird, but at least consistent!

Knowing that it had something to do with the USB device, I started looking at a few system logs.  I ran 'dmesg', which shows the device driver logs for the system, but didn't see anything unusual.  So it wasn't a kernel issue, which meant it had to be in X.

Diagnosing in X

The first thing I did was to see what X saw when I had the device plugged in.  I ran 'xinput', which shows a list of all of the input devices attached to the system.  Before plugging in the device I saw:

⎡ Virtual core pointer                     id=2 [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer               id=4 [slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen                         id=10 [slave  pointer  (2)]
⎜   ↳ DLL0665:01 06CB:76AD Touchpad           id=12 [slave  pointer  (2)]
⎣ Virtual core keyboard                   id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard             id=5 [slave  keyboard (3)]
    ↳ Power Button                             id=6 [slave  keyboard (3)]
    ↳ Video Bus                               id=7 [slave  keyboard (3)]
    ↳ Power Button                             id=8 [slave  keyboard (3)]
    ↳ Sleep Button                             id=9 [slave  keyboard (3)]
    ↳ Integrated_Webcam_HD                     id=11 [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard             id=13 [slave  keyboard (3)]
    ↳ Dell WMI hotkeys                         id=14 [slave  keyboard (3)]

and afterwards I saw:

⎡ Virtual core pointer                     id=2 [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer               id=4 [slave  pointer  (2)]
⎜   ↳ ELAN Touchscreen                         id=10 [slave  pointer  (2)]
⎜   ↳ DLL0665:01 06CB:76AD Touchpad           id=12 [slave  pointer  (2)]
⎜   ↳ GN Netcom A/S Jabra EVOLVE LINK MS       id=15 [slave  pointer  (2)]
⎣ Virtual core keyboard                   id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard             id=5 [slave  keyboard (3)]
    ↳ Power Button                             id=6 [slave  keyboard (3)]
    ↳ Video Bus                               id=7 [slave  keyboard (3)]
    ↳ Power Button                             id=8 [slave  keyboard (3)]
    ↳ Sleep Button                             id=9 [slave  keyboard (3)]
    ↳ Integrated_Webcam_HD                     id=11 [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard             id=13 [slave  keyboard (3)]
    ↳ Dell WMI hotkeys                         id=14 [slave  keyboard (3)]

The new line in the pointer section (id=15) shows the change: this was the device I had plugged in.  (It appears that GN Netcom A/S is the actual manufacturer and Jabra the brand.)

This told me I was on the right track: the USB headset was showing up as an input device.  This could be why it was interfering with the mouse!  But I needed more information.  Fortunately, xinput has a more verbose mode, 'xinput list --long':

⎜   ↳ GN Netcom A/S Jabra EVOLVE LINK MS       id=15 [slave  pointer  (2)]
    Reporting 16 classes:
        Class originated from: 15. Type: XIButtonClass
        Buttons supported: 12
        Button labels: "Button 0" "Button 1" "Button 2" "Button Wheel Up" "Button Wheel Down" "Button Horiz Wheel Left" "Button Horiz Wheel Right" "Button 3" "Button 4" "Button 5" "Button 6" "Button 7"
        Button state:
        Class originated from: 15. Type: XIKeyClass
        Keycodes supported: 248
        Class originated from: 15. Type: XIValuatorClass
        Detail for Valuator 0:
            Label: Abs X
            Range: 0.000000 - 1000.000000
            Resolution: 0 units/m
            Mode: absolute
            Current value: 1600.000000

(I've trimmed this down quite a bit; 'xinput list --long' really is long.)

From this I saw that it was reporting itself as a mouse device.  (Why does a USB headset need to pretend to the OS to be a mouse?  I still don't know.)  Something in that list of mouse buttons was messing with my actual mouse.  So I needed to disable that functionality while leaving the rest unchanged.

Fine-grained device control in X

The X device control system allows you to make a lot of tweaks to how devices work.  You can change how a keyboard or mouse work, how quickly the pointer moves when you move the mouse, how it accelerates based on the direction you are moving... all sorts of things.  But for my purposes I needed to use the functionality to change the order of mouse buttons.  This is normally for doing things like making a mouse more comfortable for left-handed users (e.g. changing the button order on the mouse from "1 2 3" to "3 2 1" so the left-hand index finger is the primary button).  But you can also map physical buttons to "button 0", which means disabled.  So I just disabled all 12 (why 12‽) mouse buttons.
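Before touching any config files, the same remap can be tried out live with 'xinput' itself.  The device id (15 here, taken from the listing earlier) is an assumption -- ids can change every time the device is plugged in, so check 'xinput' first.  The change only lasts until the device is unplugged:

```shell
# Map all 12 buttons of device 15 to button 0, i.e. disable them.
# Temporary: unplugging and replugging the device resets the mapping.
xinput set-button-map 15 0 0 0 0 0 0 0 0 0 0 0 0
```

This is a handy way to confirm you've found the right device before making anything permanent.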

I did this with the help of the X device control manual here: www.x.org/archive/X11R7.5/doc/man/man4/evdev.4.html

The device control system looks in a special location for files that contain instructions to change how devices work.  On Ubuntu this location is /usr/share/X11/xorg.conf.d/ but it may be different on other Linux distributions.

The solution

I created /usr/share/X11/xorg.conf.d/50-jabra.conf with the following contents:

Section "InputClass"
        Identifier "Jabra"
        MatchProduct "GN Netcom A/S Jabra EVOLVE LINK MS"
        Option "ButtonMapping" "0 0 0 0 0 0 0 0 0 0 0 0"
EndSection

I then restarted the X system to make sure the changes would take effect.  (A reboot is the simplest way to do this if you're unsure how.)

Once that was finished I could plug and unplug my USB headset with no issues!


I submitted a bug report to the Ubuntu bug tracking system with my fix attached.  Hopefully it will be picked up and added to the distribution.  In the meantime I'll leave this post up in case anyone else has similar issues.

Saturday, May 7, 2016

Communicating internationally and being understood

English is the language of international business.  It's used as the default language of international conferences, technical papers, and contracts.  I've lost count of the number of conversations I've overheard where, say, a German and French speaker are communicating with each other in English rather than trying to use either of their native languages.

But unfortunately there's a group of people that often makes communication more difficult than it needs to be: native English speakers!  Too often a native speaker will forget that other people are working much harder than they are in a conversation and will slip into bad habits that make it even harder.

I've noticed some simple things that native speakers can do to fix this.  In my experience, these have a dramatic impact on how easily others are able to understand spoken English.  Think of these as a way to help your listener.  You're not meeting them halfway (as they are speaking your language, after all!) but at least you're removing some roadblocks in their path.

Keys to being more easily understood

  • Speak slowly.
  • Face your listener.
  • Keep your mouth clear.
  • Don't laugh while speaking.
  • Keep your sentences clear and direct.

Speak slowly

This should almost go without saying, but is easily forgotten, especially if you are speaking with someone who speaks English nearly fluently.

Consciously make an effort to slow your speech down a bit more than normal.  Not so slowly that others will think you're making fun of them, but enough to ensure that you are not running words together or moving so fast that your listener can't keep up.  I usually mentally aim to speak at about 85-90% of my normal pace -- enough to remind me to take my time and not rush.

Face your listener

This too is common sense.  Partially this is to help you be heard clearly.  But more importantly, keeping your focus on your listener will help you see if they are getting lost or confused.  It's better to slow down and get everyone on the same page as soon as there's an issue rather than realizing that you haven't been understood for several minutes.

If your listener starts to look confused or anxious, it might be a good idea to check in with them to make sure there aren't any communication issues.  It can be embarrassing for them to stop someone who is obviously on a roll to say "Slow down!" or "I'm having trouble understanding you."  Take responsibility for being understood yourself -- it really goes a long way.

Keep your mouth clear

We all know that it's rude to talk with your mouth full.  It also makes it harder to make out what you are saying, so don't do it.

But there's more to this.  Try not to cover your mouth when speaking.  You might have a habit of hiding your smile behind your hand or stroking your chin thoughtfully when speaking.  Be aware of these and make sure you avoid them!  Not only do they muffle your voice, but they hide your lips.  Visual information (in the form of mouth movement and shape) is extremely important in understanding speech.

Don't laugh while speaking

This goes along with the previous point, but is so easy to forget that I wanted to call it out specifically.  If you are laughing, stop speaking while doing so.  It's fine to laugh (or cough, or clear your throat, etc.) but don't mix it with speech.  If you need to wait a few extra seconds to keep going, that's fine -- much better to take a breath than to make your listener try to decipher something like "and then ha-heh-the-heh bar-heh-ten-ha-ha-der says 'why-he-he the-ha lo-ha-ong-heh fay-hay-hay-ce?'".

Keep your sentences clear and direct

I put this one last because I think it's the hardest technique in the list, so starting with the others will get you more effect for less effort.  But that doesn't mean it's not important!

What I mean is to try to use a consistent structure in your sentences.  Aim for the typical subject-verb-object word order.  Don't use too many dependent clauses.  Avoid dangling modifiers.  Basically, all the stuff you learned in grade school English classes and then promptly forgot.

Avoid run-on sentences and "filler" words.  Listen to your own speech.  Do you find yourself saying "ummmmmm" when looking for the right word?  Or things like "and then I saw the man and then he walked over to me and then we shook hands and then we started to talk and then..."?  This nonstop wall of sound is really cognitively taxing -- you're loading up a lot of extra work on your listener, forcing them to determine which words they need to mentally filter out and which words they actively need to translate.

So take pauses!  Breathe between sentences and let them sink in a bit.  If you need to think of a word, just think rather than filling the space.  You'll be much easier and more pleasant to listen to.

Be a better speaker

This advice applies to when speaking to native listeners too, by the way.  Listen to some TED talks or other really good speakers.  You'll note that they take pauses, speak at a moderate pace with clear syntax and diction, and don't "ummmm" or "uhhhh".  Learning and internalizing these techniques will make you easier to understand and a more compelling speaker.

(Of course, you could always try the opposite but I wouldn't recommend it.)

Saturday, February 13, 2016

Building a home automation system the easy way

In a previous post I talked about the idea of "cheap code" to go with cheap hardware.  Not every program has to be useful to the entire world or fit for general purpose -- it's fine to have a simple, ad-hoc script to do what you need to do quickly.

I've been using this idea to build a simple home automation system for the connected devices in my house.  The code isn't elegant.  There's no user interface to add new devices or change settings.  It's not even secure in any way.  But it works and I have it now, and if I were aiming to make something I could box up and ship as a real product, I'd still be working on it.  (Actually, that's not true.  I'd lose interest and never finish it.)  But even if the code I've written isn't generally useful, the structure and ideas could be.

So with that in mind, here's a sketch of how all the pieces fit together.


Logical overview of my home automation remote system

At a high level the architecture is pretty simple.  At the center running the whole show is an Onion Omega, a cheap wifi-enabled computer made for prototyping and projects like these.

The Omega is running a webserver and PHP to serve up the remote control interface and process commands for the projector, which is connected via a USB serial interface.  My stereo and lighting system are on the local network and can be controlled through HTTP requests.

The interface

The remote control interface is just a simple HTML page, not even processed through PHP.  All of the actual control is done through Javascript, which means almost everything is actually executing on the device I access the remote control interface through (e.g. a phone, tablet, or laptop).  This gives me a few advantages.  I don't need a lot of processing power or memory on the Omega, as it's really doing very little as far as actual control goes.  But it also lets me control devices and have a dynamic interface with no wait for a round-trip request to the Omega's webserver.  The entire interface is loaded on the first connection, and then parts are simply displayed or hidden in reaction to what the user presses, so there's no frustrating lag after controlling one device before selecting another.

One other advantage of doing the whole thing in Javascript was a complete accident.  I realized after I'd started writing the remote control that I could have different looks or skins for it while sharing the same underlying code.  This lets me develop and test in a bare-bones HTML interface but then get really silly when running it on the tablet in my living room.

Yes, that is a Star Trek LCARS interface.  And yes, I am a geek.  The CSS is available on Github.

Javascript for REST devices

Both my Denon stereo and Philips Hue lights are controlled by sending an HTTP request to the device.  Since I'm doing the control from Javascript, the easiest way is to abuse AJAX, the browser functionality that lets Javascript make asynchronous requests to a webserver.  Usually this is used to get data from a remote server and update something on a page in the browser.  But in my case, all I want to do is send a bit of information to a device and I don't care what the device sends back.  In fact, if it sends me anything, I'm just going to ignore it.

(Remember, this is all about doing things cheaply and quickly!  If my stereo gets confused by something I send it, no one will get hurt or lose money.  The worst that will happen is that I have to walk across the room and turn it on by hand.  So why fret too much about checking for errors?)

The stereo is the easy one.  It just requires an HTTP GET of a specific URL to set it to what I want.  It does return some status information, but again I just ignore that.

function setStereo(status) {
    var xhr = new XMLHttpRequest();
    var url;
    switch (status) {
        case 'on':
            //url = 'MainZone/index.put.asp?cmd0=PutSystem_OnStandby%2FON';
            url = 'MainZone/index.put.asp?cmd0=PutZone_OnOff%2FON';
            break;
        case 'off':
            url = 'MainZone/index.put.asp?cmd0=PutSystem_OnStandby%2FSTANDBY';
            break;
        case 'mute':
            url = 'MainZone/index.put.asp?cmd0=PutVolumeMute%2Fon';
            break;
    }
    // Fire and forget: kick off the request and ignore the response
    xhr.open('GET', 'http://0005CD3A5CE1.udlunder/' + url);
    xhr.send();
}

The Philips Hue system is only slightly more complex.  It requires an HTTP PUT rather than a GET, and expects a JSON document.  While you can control color and brightness of individual bulbs, I just made some presets stored in the Hue itself for different lighting scenes and so only need a very simple JSON document to recall them.

function setScene(scene) {
    var xhr = new XMLHttpRequest();
    // The bridge address and API username here are placeholders for my real ones
    xhr.open('PUT', 'http://<bridge-ip>/api/<username>/groups/0/action');
    xhr.send('{"scene":"' + scene + '"}');
}

Controlling the projector

Probably the ugliest part of the whole thing is the projector.  It doesn't have a REST interface like the other devices, but it does have a serial port.  This is intended for industrial applications or control of the projector when it's mounted somewhere inaccessible, but it works just as well for people like me who are too lazy to keep track of yet another remote control.

This is the only place where the Omega is doing anything more than serving static web pages.  The PHP server on the Omega is used only to receive a projector command and pass it to a shell script that runs it.  The PHP script is truly ugly, doing no error control or security checks -- it's a great way for someone to take control of my Omega or run malicious code there.  But frankly there's nothing of value stored on it and to attack it they'd have to be on my local network anyway, meaning I have bigger problems.  So I'm okay with this.

    system("stty -F /dev/ttyUSB0 115200");
    system("/root/projector.sh " . $_GET["c"]);

(That's it -- the entire PHP script is two lines of code.)
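The shell script itself isn't shown above, but a minimal sketch of the idea might look like this.  The 'PWR ON'/'PWR OFF' strings and the function structure are assumptions for illustration -- every projector model speaks its own serial protocol, so the real command codes come from the manual:

```shell
# Hypothetical sketch of the projector.sh logic.  The command strings are
# placeholders; real codes vary by projector model.  SERIAL_DEV defaults to
# the USB serial adapter that the stty call configured.
projector_cmd() {
    DEV="${SERIAL_DEV:-/dev/ttyUSB0}"
    case "$1" in
        on)  printf 'PWR ON\r'  > "$DEV" ;;
        off) printf 'PWR OFF\r' > "$DEV" ;;
        *)   echo "unknown command: $1" >&2; return 1 ;;
    esac
}
```

Since the serial port is just a file on Linux, no special libraries are needed: a redirect is the whole "driver".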

Putting it all together

These individual pieces are all simple and useful, but combined they're fantastic!  I can reuse those Javascript functions all over my remote control and have one button do many things.  It looks more complicated than it really is when you diagram out what's going on inside:

Flow to set up multiple devices for a "movie" preset from a single remote control button press

There are ways to build dynamic Javascript code, but again I took the easy way out.  I made a short function for every preset that I wanted and just call the appropriate functions.  For example, when I select the "Movie" button, the lights are dimmed, the stereo and projector are turned on and both are set to the proper input:

function scene_lr_movietime() {
    setScene('movietime');   // recall the dimmed-lights Hue preset (scene name illustrative)
    setStereo('on');
    setProjector('on');      // illustrative helper that hits the Omega's PHP endpoint
}

(I broke the light settings into a separate function just so that I could have another button on the remote that would change the lights without turning on the projector or stereo and not have to write more code.)

Wrapping up

That's really it.  Nothing here is what I would call robust or beautiful, but it all works and was pretty easy to do.  It's a great sort of project to tackle if you want to learn more about the Internet of Things or find a use for some of the cheap hardware that's available today.  And these components can be further built on as you add new devices or want to add new functionality.

Monday, January 4, 2016

Cheap Code for Cheap Computers

The Free/Open Source Software movement has made me a perfectionist, but that's not always a good thing.

When I look around at the programs I can download and use for free, I'm frequently amazed.  I can have everything from word processors to webservers up and running with a couple of clicks.  And what's more, these are incredibly well-written -- I can look at the code and it looks much nicer than anything I do!

This often stops me from writing things to scratch one of my own personal itches.  I start by thinking how great it would be to have a little app on my phone to turn down the lights, turn on my stereo and television, and get everything ready for watching a movie.  But when I start writing that, I start generalizing.  "What if someone wanted to change the settings on the lights?  Or if they have a different television brand with totally different controls?  And of course there needs to be a way to autodetect what devices are on the network and set them up..."  By the time I get to this point, I usually decide it's more effort than getting up and turning down the lights myself.

But there's something different about it when I am playing with IoT devices.  When a Raspberry Pi is so cheap that it's given away for free with a magazine, suddenly making a quick and dirty button to do only what I need doesn't seem as bad.  Somehow having a computer so small and cheap that I could literally lose it and not even notice makes it okay for me to write ugly code just for myself.

So I've started doing that!  I'm not even putting most of it up on Github.  Sometimes because it really is embarrassingly bad, like the webservice I made to control my projector: a PHP script that just takes a request and passes it straight through to a shell script with no error checking or security.  But sometimes just because I'm too lazy to write the more complex, general case code and want to just put IP addresses or light bulb IDs directly into a bit of Javascript and be done with it.

The result of this is something magical: I finally have my little remote control app, and it didn't take long at all to write.  (I even have a button to turn the lights in my room a deep violet, turn on the lava lamp, and start playing Barry White.  Just in case.)

I may eventually clean some of this up and publish it for others to use, if for no other reason than as an example of how it can be okay to not worry about doing things the Right Way™ on occasion.  But for now, I'm enjoying the fact that what I wrote is actually useful to me!

Saturday, December 19, 2015

Exploring the Onion Omega

Some backstory (or the importance of being engaged)

I'll be honest, I felt a bit burned out on new small Internet of Things devices after my previous experience with the Belleds Q.  The Q shipped, but was immediately abandoned by the development team and left to die.  I even tried to help keep things moving by creating some projects that I documented here and setting up a forum called IoTTalk to bring the folks hacking on the Q together, but it wasn't enough without the support of the project's founders who moved with no forwarding address.

But a few months back, I joined in on the Kickstarter for the Onion Omega, a platform for prototyping and building Internet of Things devices.  I'm happy to say that my experience with Onion couldn't be more different!  The founders have been incredibly involved and responsive, interacting with the community on a daily basis.  Seeing what people are already doing with this new platform and being part of the community is truly exciting!

Now onto the Onion Omega

The Omega is a tiny development platform based on a MIPS System on a Chip (SoC).  It includes a 400MHz processor, 64MB of RAM, 16MB of onboard flash storage, WiFi, and a number of expansion pins that can be used for a multitude of purposes, all for about $19.  It barely sips power: it can run for several days off of a common cell phone external battery via its USB power input.  Some existing expansions provide USB ports, Ethernet networking, relays, and even a tiny OLED display!

The Omega itself is about 44mm/1.75" along its longest axis, or about 12mm longer than a standard SD card. 

The best part for me is that it runs Linux natively.  I've been hacking on Linux for a couple of decades at this point and am usually more comfortable with a command line than a soldering iron, so having a platform that I can start using from a familiar environment is huge for me.

Out of the box the Omega provides a WiFi access point, and when docked to an expansion with a USB port it also offers a serial console; either way you get both ssh/console access and a web interface.  This makes it easy to start using with no special tools or setup required.  It runs OpenWRT, a Linux distribution commonly used in wireless routers, meaning it has access to a wide array of software already packaged.  (At the moment there's still some ongoing effort to recompile all those packages for the Omega, but even for the missing ones the Onion team has great instructions and support for cross-compiling and pushing files over to the Omega.)

So what can it do?

Toy projects

I've done a couple of toy projects with the Omega already.  To play with the platform, the first thing I did was set up a script to poll the wireless networks within its range every 5 seconds and write them to a file.  I then plugged the Omega into an external cell phone battery and took a brief walk around my neighborhood to see what I would see.  The code for that and the simple (and ugly!) parser is in a repository on Github.  (It can even display the current summary status on the OLED expansion!)

I've written up a few more tutorials (mostly aimed at people with limited Linux experience) over at the Onion Community site.  For example, by installing a single package and using simple shell commands, you can blink the onboard LED in a Morse code message.
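The Morse trick is all plain shell.  As a sketch of the idea (the sysfs LED path below is an assumed placeholder -- 'ls /sys/class/leds' shows the real name on your device -- and only two letters are mapped for brevity):

```shell
# Sketch of Morse blinking via the kernel's sysfs LED interface.
# The LED path is a guessed placeholder; check /sys/class/leds on your device.
LED="${LED:-/sys/class/leds/onion:amber:system/brightness}"

morse_char() {   # translate one letter into dots and dashes
    case "$1" in
        s|S) echo '...' ;;
        o|O) echo '---' ;;
        # ...map the rest of the alphabet here
    esac
}

blink_symbol() { # a dot is a short flash, a dash a long one
    echo 1 > "$LED"
    if [ "$1" = '.' ]; then sleep 1; else sleep 3; fi
    echo 0 > "$LED"
    sleep 1
}
```

Looping over a message and feeding each symbol to blink_symbol is all that's left.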

Save some money

Slightly more usefully, I used some shell scripts and an Omega to act as a WiFi-enabled remote control for the serial port in my home projector.  This is hundreds of dollars cheaper than any existing wireless serial system I could find and will allow me to connect my projector to a home automation system.

An Onion Omega and USB to serial adapter plugged into my video projector allowed me to duplicate a wireless serial adapter that sells for hundreds of dollars.

The real beauty of this is that I didn't have to do any hardware hacking.  All of the functionality is exposed through Linux, meaning a few shell scripts or a Python program is all I need to use the functionality of these real-world devices.

What's next?

After I mount my projector on the ceiling and have a single button to dim the lights, turn on the stereo receiver and projector, and load up Netflix, I'll need a new project.  I wonder if anyone is looking at porting Snappy Ubuntu to the Omega...

(There were over 500 unique unsecured access points in range during my 20 minute walk, and an even larger number still using the long-deprecated WEP security system that would be basically effortless to break.  Upgrade your routers, people!)

Sunday, February 22, 2015

A bit more on the Q

I poked at the Q Station and my light bulbs a bit more today, and ended up with a couple of Bash scripts that might be fun or amusing. One of them is a very simple GUI for changing bulb colors, and the other sends Morse code messages by blinking the light on and off. (Thanks to my partner Andi for thinking of that one!)

Bulb names are configurable but I'm lazy.

Yes, it's just the Zenity color selection dialog.

These are quick and dirty scripts using Zenity, but they work (at least under a default Ubuntu install) and make playing with the bulbs a lot easier.  I haven't tested them on any other Linux distributions, but as long as Zenity and netcat are available they should work.

What I hope is slightly more interesting is that I've tried to separate out some simple but useful functions for working with the Q.  I hope these will be useful to other people in playing with the hardware.  There are Bash functions for changing the color of the bulbs, turning them on and off, and building the JSON string that the Q expects via its UDP interface.  (It's painful and error-prone to do this by hand.)
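To give a flavor of the JSON-building helpers, here's a sketch of the shape of the idea.  This is not the actual code from the repository, and the JSON field names are invented for illustration -- the real ones are defined by the Q's UDP protocol:

```shell
# Illustrative only: these field names are placeholders, not the Q's real
# protocol (see the repository for the actual strings).
q_build_json() {   # q_build_json <bulb_id> <rrggbb hex color>
    printf '{"cmd":"set","bulb":"%s","color":"%s"}' "$1" "$2"
}

# A command built this way would then be pushed to the Q Station over UDP:
#   q_build_json 1 ff0000 | nc -u -w1 <station-ip> <port>
```

Generating the string in one place means the fiddly quoting only has to be gotten right once.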

The code is up on Github and can be cloned with 'git clone https://github.com/mccollam/belleds.git'.  If anyone uses this or has any suggestions I'd love to hear about it!