by Patrick Murray of Controlhaus
Video projectors, matrix switchers, touchpanels – it doesn’t get more AV than that. But more and more we see projectors replaced by flat-panel displays, streaming used instead of switchers, and a new breed of user interface pushing out our beloved touchpanels.
No-touch control through voice, presence, calendar integration and even QR codes is a subtle change that may be easy to miss if we don’t pay attention. Users want simplicity. And no-touch interfaces force the designer to boil down a system to essential functions. No-touch has the potential to bring the Jetsons experience closer than ever before.
The success of an automation system can be measured by how little user interaction is required. This idea is easy to overlook for an industry that has been focused on the touchpanel for so long. We love our slick graphics and strategically placed menus that account for every possible control scenario. While larger systems will always need that type of interface, most systems – way more than half – need just a handful of functions like power on and off, source selection and maybe some volume control.
The biggest change in user interaction has been voice control. Amazon really nailed it with Alexa and popularized the idea of voice control that actually works. If you haven’t used Apple’s Siri in a while, you may be surprised how far it has come. I find myself telling Siri to do all sorts of tasks instead of searching through screens of endless app icons.
There are challenges with integrating voice services in a custom control system. Alexa uses skills to let developers integrate with the Amazon Voice Service. This can be cumbersome for the end user because they need to say more: “Alexa, tell my control system to start the presentation.”
But did you know you could make your own Alexa? This Github project shows you how. The most difficult part looks to be the wake word. The wake word tells Alexa to start sending what you say to the internet for processing. It is also where most of the privacy concerns come up. So I say skip the wake word and activate the service with a button. You could even isolate the audio input when not in use and have an LED in the room indicating when it is on. We’ve already got the mics, right?
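The push-to-talk idea can be sketched in a few lines: only forward mic audio to the voice service while the button is held, and mirror that state on the room LED. This is a minimal illustration, not a real Alexa Voice Service client – the `send_to_cloud` and `set_led` hooks are hypothetical stand-ins for whatever your hardware exposes.

```python
# Sketch of a push-to-talk gate that replaces the wake word.
# Hardware I/O (button, LED, cloud uplink) is stubbed with hypothetical callables.

class PushToTalkGate:
    """Forward mic audio to the voice service only while the button is held."""

    def __init__(self, send_to_cloud, set_led):
        self.send_to_cloud = send_to_cloud  # e.g. uplink to the voice service
        self.set_led = set_led              # room LED: True = mic is live
        self.active = False

    def button_changed(self, pressed):
        # The LED mirrors the mic state, so users always know when audio leaves the room.
        self.active = pressed
        self.set_led(pressed)

    def on_audio_frame(self, frame):
        # Drop audio unless the user is actively holding the button.
        if self.active:
            self.send_to_cloud(frame)

# Usage sketch
sent = []
gate = PushToTalkGate(send_to_cloud=sent.append, set_led=lambda on: None)
gate.on_audio_frame(b"idle chatter")              # ignored: button not pressed
gate.button_changed(True)
gate.on_audio_frame(b"start the presentation")    # forwarded
gate.button_changed(False)
gate.on_audio_frame(b"private conversation")      # ignored again
```

Nothing leaves the room unless the user explicitly asks for it, which sidesteps the always-listening privacy objection entirely.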
While Apple recently started to open up Siri for developers, it is still too limited. Siri tries to figure out what the user is asking with Intents. Apple has made a few intents open for developers including VOIP, messaging, payments, lists, notes, photos, workouts, ride booking, car commands and restaurant reservations. No AV intent? Maybe with iOS 12 (don’t hold your breath).
HomeKit would be the best way to use Siri with an automation and control system. But Apple, quite reasonably, has made security a priority with HomeKit. That means a developer needs to register every device with HomeKit through hardware. Manufacturers can join Apple’s MFi program and install the required chip in their products. HomeKit provides security by authenticating with every device it talks to. No special chip means no HomeKit. RS-232? Infrared? Relays? Siri no understand…
With many people carrying around a GPS receiver in their pocket, it is possible to control a system based on the user’s location. Or the location of their phone anyway. The main challenges here are battery life and privacy. If the user does not opt-in to location tracking with your app, then it just won’t work.
Reducing the accuracy of location detection is one way to improve battery life. This is acceptable when automation events do not need to be fired immediately when the user enters or exits a region. A good application is to set up a room or environment when someone is on their way to work. If the automation needs to happen exactly when someone enters a room, you’ll need to install some beacons.
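The “on their way to work” case reduces to a simple server-side geofence check: compute the distance from each coarse location fix to the region center, and fire once when a fix crosses inside. A minimal sketch, assuming the app reports occasional battery-friendly fixes; the radius and coordinates are illustrative.

```python
# Sketch of a coarse geofence: fire an automation once when the user's
# reported location enters a region around, say, the office.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def entered_region(prev_fix, new_fix, center, radius_m=500):
    """True only on the transition from outside to inside the region."""
    was_in = distance_m(*prev_fix, *center) <= radius_m
    now_in = distance_m(*new_fix, *center) <= radius_m
    return now_in and not was_in
```

Because the trigger only needs to fire a few minutes before arrival, a 500-meter radius and infrequent coarse fixes are plenty – no need to burn the phone’s battery on GPS-grade precision.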
Beacons use Bluetooth to send out a message at regular intervals. All they do is say, “I am here”. You can set the range from a few meters up to 100 meters. The shorter the distance, the better the battery life. Using multiple beacons increases accuracy. The user will need to have Bluetooth turned on and your app installed. But once they witness the world magically adjusting to their mere presence, they may never lose their phone again.
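In practice the app sees each beacon as an identifier plus a signal strength (RSSI), and picks the strongest to decide which room the user is in. The sketch below uses the standard log-distance path-loss model for the distance estimate; the beacon-to-room mapping and calibration constants are illustrative assumptions, not a real SDK.

```python
# Sketch: turn beacon RSSI readings into "which room is the user in?"

def estimate_distance_m(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model.

    tx_power is the calibrated RSSI at 1 meter; n is the environment
    exponent (~2 in open space, higher with walls and furniture).
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_room(readings):
    """readings: {beacon_id: rssi}. Returns the room of the strongest beacon."""
    beacon_rooms = {"b1": "Boardroom", "b2": "Huddle A"}  # hypothetical mapping
    best = max(readings, key=readings.get)  # highest RSSI = closest beacon
    return beacon_rooms.get(best)
```

A single beacon per room plus “strongest wins” is usually enough to trigger room-level automation; fusing several distance estimates only becomes necessary if you need position within a room.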
Of course you’ll need to spend some time thinking about what makes sense to automate. What if multiple people are using your app in the same space? Whose preferences take precedence?
The idea of controlling a room based on calendar entries has been around for a while, but is not as popular as it should be. The main challenge here is integrating calendar applications with the control system. Instead of creating a new calendar app, systems should integrate with popular platforms like Office 365 and Google Calendar.
That kind of integration is a huge development project. Proprietary management systems tackle this problem with complicated server installations and programming. Online services like IFTTT and Zapier have the potential to make the task a lot easier. But they do not support custom applications like control systems. Not yet anyway…
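Whatever fetches the events – a full Office 365 or Google Calendar integration, or some future IFTTT-style glue – the control-system side of the logic is small: power the room when a meeting is in progress or about to start. A sketch under that assumption; the event field names and warm-up window are illustrative.

```python
# Sketch of the calendar-to-room decision: given upcoming events (fetched
# elsewhere), should the room be powered on right now?
from datetime import datetime, timedelta

def room_should_be_on(events, now, warmup=timedelta(minutes=10)):
    """True if any event is in progress or starts within the warm-up window.

    events: list of {"start": datetime, "end": datetime} dicts.
    The warm-up lead time lets projectors and displays come up before
    attendees walk in.
    """
    for ev in events:
        if ev["start"] - warmup <= now < ev["end"]:
            return True
    return False

# Usage sketch: a 10:00-11:00 meeting
meetings = [{"start": datetime(2018, 5, 1, 10, 0), "end": datetime(2018, 5, 1, 11, 0)}]
```

Polling this every minute or so against the booking system is crude but robust, and it also gives you the inverse for free: no event in range means the room can shut itself down.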
QR, or Quick Response, codes lost a lot of street cred because you needed a special app to use them. But the camera app in iOS 11 now has a built-in QR code reader – putting the “Quick” back in QR. The most publicized use is helping users get onto public WiFi. Just scan a code for instructions.
Scanning a QR code could also open an app that controls the system. But why stop there? Let’s say you have a system that only authorized people should operate. A user scans the QR code, your app opens and asks for a passcode, Touch ID or Face ID, and the system magically powers on with that person’s settings.
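One simple way to wire that up is to encode a deep link in the QR code; the app parses the link, demands authentication, and only then recalls that user’s settings. In the sketch below the `controlhaus://` URL scheme and the `authenticate` hook (standing in for a passcode, Touch ID or Face ID prompt) are hypothetical.

```python
# Sketch: QR code payload is a deep link like "controlhaus://boardroom?preset=alice".
# The app parses it and gates the power-on behind an authentication check.
from urllib.parse import urlparse, parse_qs

def handle_qr_link(link, authenticate):
    """authenticate: callable standing in for passcode / Touch ID / Face ID."""
    url = urlparse(link)
    if url.scheme != "controlhaus":
        return None                      # not one of our codes: ignore
    if not authenticate():
        return None                      # unauthorized: the system stays off
    room = url.netloc
    preset = parse_qs(url.query).get("preset", ["default"])[0]
    return f"power on {room} with preset {preset}"
```

Because the code itself carries no credentials – just a pointer into the app – posting it on the wall of the room costs you nothing security-wise; the biometric check is what actually opens the door.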
No-touch control is here to stay. It cannot replace every graphical user interface, but with the push for simplicity and systems that just work, expect to see more alternative UIs. Looking at the bright side, user interfaces will get approved much more quickly when no graphics are involved.