Sunday, October 5, 2014

Circles of the Home Automation hell

Introduction

Just a week ago I knew nothing about IoT. All the buzz was meant (I thought) for someone else, but that changed when I bought a house. Suddenly, a strange desire was born in my mind: to connect the entities (doors, rooms, speakers, etc.) inside my house, to monitor and control them as a system. Thus, I started looking into Home Automation, and with each new iteration I dove deeper and deeper into the hell of technological diversity.

 

0. Complete Solutions

There are a lot of companies (like this one nearby) offering to install their proprietary home automation systems for a big sum of money on a case-by-case basis. That wasn't acceptable for me, because I wanted to know what my options were and expand gradually.

 

1. Isolated subsystems

Some companies specialize only in a specific domain. For example, Lutron is known for its lights and dimmers, which come with remotes. Philips Hue makes the best LED lights, also remotely controlled. Nest is a decent (and awesome looking!) learning thermostat. Going wider, Vera allows you to buy the components that you need and attach them to the system dynamically. Naturally, I started asking myself why these things can't be organized together: what language do they use to talk to each other?

 

2. Wireless Protocols

It turned out there is a lot of diversity in wireless means of communication. You thought WiFi and Bluetooth were enough for everyone? Welcome to the real hell:

ZigBee - perhaps the oldest protocol (outside of X10, which we'll skip). It looks decent on paper: support for mesh networking, low power consumption, a huge group of supporters. It is ISO certified, and you can find numerous open-source libraries and protocol implementations. However, the devil hides in the details, again. There are two incompatible kinds of hardware: Series 1 and Series 2. There are different communication profiles, and you can't easily mix and match them. Finally, when I looked for something specific, like a temperature/humidity sensor, there were too few options for a real "Buy" button to appear. It seems that ZigBee has some fundamental issues (maybe the ones I mentioned) that pushed the development of the alternatives.

Z-Wave - the most available (in terms of hardware) protocol, which also claims to be very smart in design (mesh networking and such). It is really straightforward to find actual devices, but unfortunately difficult to program them: the API is opened only after buying the SDK and signing an NDA. Open-source implementations exist but seem to be rough and incomplete.

EnOcean - a protocol of European origin with its main focus on self-powered devices. There is not much out there to read about it, and the limited range of compatible devices comes from a single manufacturer. The promise of battery-less components seemed very appealing, but it is offset by their price.

Other things I didn't research include Insteon, Bluetooth Smart, WiFi (for IoT), ClearConnect (used by Lutron), and Thread (used by Nest). The last one seems exceptionally promising, but no public information about it is available yet.

At this point, I realized that if I was going to control my devices, they had to use the same protocol. Alternatively, I found a market of cross-protocol universal hubs, such as Wink, Revolv, Staples Connect, and SmartThings. While they do allow using devices from different networks, only Wink seems to provide an actual API, and I doubt it works flawlessly. If you don't intend to be in full control, this may be your last stop (for good).

Another problem, inherent in most solutions, is the requirement of Internet access. While I do appreciate a mobile app to access my home system, I believe it should not involve a cloud server. The cloud is a privacy hole and yet another point of potential failure. My system should be as self-contained as possible.

Thus, I wanted to go deeper... The hub, I imagined, could be a headless Raspberry Pi with an RF module (like XBee) that I'd program myself in Rust. Fortunately, the Pi supports modules for all major RF protocols. The hub would host a website for online access and run my house on a 24/7 schedule. The only problem was locking into a single protocol (and its API), because I didn't want to hook up and dig into multiple protocols at once.
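
To make the idea more concrete, here is a minimal sketch of what such a self-contained hub could look like in Rust, using only the standard library. The device path, the "name=value" line format, and the port number are my own assumptions for illustration; a real XBee link would need a proper serial/API-frame library.

    use std::collections::HashMap;
    use std::fs::File;
    use std::io::{BufRead, BufReader, Read, Write};
    use std::net::TcpListener;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() -> std::io::Result<()> {
        // Latest reading per sensor name, shared between the radio thread and the web server.
        let readings: Arc<Mutex<HashMap<String, String>>> = Arc::new(Mutex::new(HashMap::new()));

        // Radio thread: the XBee shows up as a serial device; the path and the
        // "name=value" line format are assumptions made for this sketch.
        {
            let readings = Arc::clone(&readings);
            thread::spawn(move || {
                let port = File::open("/dev/ttyUSB0").expect("radio module not found");
                for line in BufReader::new(port).lines().flatten() {
                    if let Some((name, value)) = line.split_once('=') {
                        readings.lock().unwrap().insert(name.to_string(), value.to_string());
                    }
                }
            });
        }

        // Minimal status page served on the local network only - no cloud involved.
        let listener = TcpListener::bind("0.0.0.0:8080")?;
        for stream in listener.incoming() {
            let mut stream = stream?;
            let mut request = [0u8; 1024];
            let _ = stream.read(&mut request); // the request itself is ignored in this sketch
            let body: String = readings
                .lock()
                .unwrap()
                .iter()
                .map(|(name, value)| format!("{}: {}\n", name, value))
                .collect();
            let response = format!(
                "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: {}\r\n\r\n{}",
                body.len(),
                body
            );
            stream.write_all(response.as_bytes())?;
        }
        Ok(())
    }

Everything stays on the LAN, which is exactly the self-contained property I'm after.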

 

3. Micro-controllers

I noticed that sensors without RF modules are very abundant and cheap. What if I connected them to my own network physically, by attaching a micro-controller and an RF module by hand? That perspective made me look into simple computing cores:

Arduino - the best-known family of boards, predominantly featuring the AVR family of controllers. Sadly, LLVM (and hence Rust) does not support this platform as a target, and the cheapest ARM core (Arduino Due) is more expensive than the Pi. If not for these factors, I'd go with Arduino in a heartbeat, for its rich documentation and wide community support.

STM32 family - the most open-source-friendly ARM chips out there. Rust has the Zinc project for running on bare metal, which was developed for this chip. Prices start as low as $8, and there are extension boards available.

Tiva C LaunchPad kit from Texas Instruments - the most impressive ARMs in terms of website navigation and availability of standard extensions. Bare boards start at $13 and seem very solid, thus being my current choice.

Freescale Freedom boards - the most diverse family of ARM chips. I found it difficult to navigate their website and to figure out what exactly I need there. I can't see any extension boards in particular, but I'm sure there are some. Prices start at about $16, though these boards include some LEDs and switches for demonstration purposes by default.

 

Conclusion

As a starting point, I decided to order one of the low-end ARM chips and try to get anything programmed on it in Rust. That will take me a while... I may end up contributing some code to Zinc, or even shifting to robotics afterwards, because that's where MCUs really shine. At the same time, I'm going to buy a Nest thermostat with a couple of Nest CO detectors for my family to enjoy while I'm tinkering with low-level stuff. If I ever reach the point where I can control my HVAC and other things remotely with my own program, I'll be happy to replace the Nest with something dumber, or just hack it to get root access to the hardware.

If you have thought about home automation, have some solutions installed, or are building your own hub in the garage - please share, as I'll be happy to learn from your experience. If you didn't care about IoT and my article made you interested - you are welcome ;)

Friday, August 15, 2014

Vision

Vision. I often heard this term and read about it, thinking I knew what it was. I thought that when the person in charge is a highly skilled engineer, he can see what others cannot, simply because of his engineering capabilities and the fact that he thinks deeply about problems (not only existing ones; he is also in constant search for new ones). Apparently, it's not that simple.

It turned out that vision is not a consequence of engineering skills but something very different. It's an extraordinary ability to see the future, a hypothetical future of a product's evolution. By seeing it, a visionary can drive product development in leaps, thus making it a revolutionary progression.

It is important to realize this is not "just" about development time (or the availability of shortcuts). Evaluation is all around us. We evaluate everything we are doing in order to learn from it and adapt. Others evaluate our work in order to figure out whether they want to invest in it. Thus, being able to leap forward gives you an instant advantage in terms of evaluation outcome, which transforms the benefit of having a vision from quantity (of time) to quality.

I faced the vision problem when I tried to design a large system in collaboration with a team. I realized (for the first time) that it's not engineering skills that we lack, but rather a clear vision of where we are going. It doesn't even matter if you are an architect, a lead developer, or God himself. If you have the vision, you'll be heard.

Aside from being mysterious, vision is still a human skill. I wonder if it can be trained, like any other skill we have. What kind of activity would that be? I don't believe that just by thinking and brainstorming we get any better at vision in general; we merely dig harder at a specific problem instead. There must be something more generic. Perhaps playing music?..

Sunday, December 8, 2013

3C Rules of Personal Development

Create
You know, do stuff. Experiment, hack, write programs, build houses. Creating means going against nature in some way: the second law of thermodynamics drives everything toward chaos, while creation is about organizing matter or information. Creation is an easy and natural behavior for children with their rich imagination, but keeping it up when you get older requires dedication.

Collaborate
Working on something in isolation can be reasonable, but the potential of collaboration is greater. Discuss your ideas with friends, relatives, and even random people on the Internet. Visit conferences to hear other people's ideas, form work groups, and adjust your own goals. A well-conducted argument may multiply each individual's intelligence with regard to solving the target problem, as the idea is ping-ponged between brains, evolving with each hit.

Complete
The creation process is an engine, and to keep working it needs to complete cycles. Finishing stuff gives you fuel to move to the next idea, and allows you to draw the right conclusions by analyzing the full cycle. Publish your work on the web, give other people a functioning product, not just a bunch of scrap and a GitHub repo link. Receive recognition, push your goal bar higher, and move on.

Sunday, August 18, 2013

Quest for the best scene format

A graphics engine needs to know how to compose a scene from a given set of resources, such as meshes, skeletons, and textures. This is what a scene file is for: it's a document that describes the relationships between basic resources, joining them into a system that can be processed effectively by the code. This file is composed either by hand (if the scene is small) or by an exporter script from a 3D modelling program. During the evolution of the KRI engine the scene format changed several times. I'll try to review the development history and analyze the various formats I used, based on personal experience.

0. Composed in code: kri-1, kri-2

This is where we all start: just slapping entities on the screen directly.
Lang:    C++
Pros:
    -no export stage
    -no need to validate
    -no parsing
Cons:
    -not extensible

1. Custom binary: kri-3

All scene data and resources were stored in a single binary file of a custom format.
Lang:    Boo
Pros:
    -no external libs
    -fast parsing
Cons:
    -non human-readable -> difficult to debug
    -difficult to validate
    -resources are not separate

2. XML: kri-web

XML is a well-known document format with the greatest language/tool support. Besides, that's what we used at my former employer.
Lang:    Dart, XML
Pros:
    -built-in validation with Schema
    -support for default values
Cons:
    -need to keep Schema synchronized with exporter/loader
    -too verbose
    -bloated loading code (no 1:1 data representation)
    -not clear what to put into attributes -> design ambiguity

3. JSON: claymore-engine

This is where I discovered JSON, and it immediately appealed to me because of its simple syntax and its 1:1 mapping to the data. Fortunately, it was also the only format Rust had built-in support for. However, it turned out to be a poor choice for the scene description due to the lack of heterogeneous structures.
Lang:    Rust, JSON
Pros:
Cons:
    -no heterogeneous structures -> difficult to read

From there I started looking for something like JSON, but meant to describe the document instead of the data. I looked into YAML, which seemed nice, a bit more complex, and not supported by Rust. Then I found Candle Object Notation, which looked like a non-ambiguous and compact version of XML, plus the 1:1 mapping to data. However, the format is not that well documented and supported... "It would be nice to have the same object initialization syntax as Rust" - I thought, when the idea hit me like a train: why not just use Rust then?

4. Rust: k-engine

Let's just export a Rust source file, which will be compiled with the application.
Lang:    Rust
Pros:
    -free validation (your code is the spec)
    -no run-time parsing -> instant loading, no run-time errors
    -no need to learn new syntax
    -compact (no external file needed to run)
Cons:
    -need to compile the scene
    -bound to the language

This approach seems to be the perfect solution for my scene format. The only thing that worries me is that it depends on Rust compile times. Still, we could parse the Rust code at run time if we wanted to, while also verifying it at compile time. You can see an actual export result here. It is compact, easy to read, and elegant.
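
For illustration, here is a hand-written sketch of what such an exported module could look like. The structs, field names, and resource paths are invented for this post, not the actual exporter output:

    // scene.rs - hypothetical exporter output: plain data, validated by the compiler.

    pub struct Node {
        pub name: &'static str,
        pub parent: Option<usize>, // index into NODES
        pub position: [f32; 3],
        pub rotation: [f32; 4],    // quaternion (x, y, z, w)
        pub scale: f32,
    }

    pub struct Entity {
        pub node: usize,           // index into NODES
        pub mesh: &'static str,
        pub material: &'static str,
    }

    pub static NODES: &[Node] = &[
        Node { name: "root",   parent: None,    position: [0.0, 0.0, 0.0], rotation: [0.0, 0.0, 0.0, 1.0], scale: 1.0 },
        Node { name: "monkey", parent: Some(0), position: [0.0, 0.0, 2.0], rotation: [0.0, 0.0, 0.0, 1.0], scale: 1.0 },
    ];

    pub static ENTITIES: &[Entity] = &[
        Entity { node: 1, mesh: "mesh/monkey", material: "material/matte" },
    ];

Loading then amounts to indexing into these arrays at start-up; a typo in a name or a missing field becomes a compile error instead of a run-time surprise.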

Saturday, January 5, 2013

My Internet

I've grown accustomed to the Internet as we know it: fire up a browser, read email and feeds, visit FB/G+/Twitter, buy something on Amazon with a Visa card. It is indeed convenient, especially if you don't care to look under the hood or explore the limits of your freedom. And the fact is: there are big companies out there (providers of "free" services) that gather all information about you.

One of the ways to use that information is to choose the advertising that you'll see. Honestly, I don't care about ads too much. Most of the time I block them anyway, and when I do see them, I'll appreciate a featured anime figure more than some silly pills. But ads are just the tip of the iceberg, the only part of it we actually see. The real problem is the power you give them, the power that limits your potential, because no one cares about your weird habits until you become big.

In an ideal society, everyone can know everything about everyone. But we, as a species, are not ideal, which makes it a matter of protection to choose what information to share and what to hide. Once your information can be bought, you never know who may turn it against you, or when. Imagine a robber aware of your vacation schedule. The security question stands right next to the privacy one. Your information is stored in a centralized manner: it can suffer denial of service, or it can be stolen - it's vulnerable.

Now, how do we work around that, while still keeping it simple and convenient? There are several solutions for different sub-issues:

DNS. Generally provided by your ISP, and thus may have some areas blocked (e.g. WikiLeaks). They know wherever you go, and they also redirect "address not found" queries.
Solution: OpenNIC, or any neutral DNS like Google DNS

File sharing. Exchanging music, books, and movies is prosecuted by the RIAA and MPAA, even if you give a copy to your friend and delete it locally. They want you to rent things for an indefinite period instead of owning them.
Solution: torrents, or private hosting if you can afford it.

Social network. This is where you expose the most of yourself. You need to preserve the rights to the content you create, and to share it only with those you care about.
Solution: Diaspora*.

Money transfer. Your Visa/AMEX/Mastercard knows everything you buy and everywhere you travel, and steals around 3% from each transaction. Also, you never know when your government will decide to print more money, instantly making whatever you have less valuable.
Solution: Bitcoin.

These solutions will only become viable once they gain a critical mass of users. I hope that my post aids this goal a little, making the Internet a better place in the near future.

Sunday, November 18, 2012

What I know about Computer Graphics

I've been working closely with CG, both professionally and as a hobby, for the past 5-6 years. I've been making games and developing engine architectures. The latest developments can be tracked on the Claymore Dev blog. I've seen different techniques, tried many others, and even written articles about them in big books (GPU Pro 3, OpenGL Insights). And the funny point is: I still don't know how to build engines... All I know is how a bunch of known techniques may help you or screw you up, based on personal experience.

Uber-shaders
Problems: Difficult to maintain due to bad granularity and monolithic approach. Unable to extend from the game side.
Alternative: Shader compositing. In OpenGL you can extend the functionality by either linking with different shader objects, swapping the subroutines, or by directly modifying the source code.
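As a minimal sketch of the source-composition flavour of that idea (the snippet names and the compose helper below are mine, not part of any engine), small GLSL fragments can be concatenated into a full shader before handing it to the driver:

    use std::collections::HashMap;

    // Compose a fragment shader from named snippets; each snippet defines
    // one function that the hand-written main below calls.
    fn compose(snippets: &HashMap<&str, &str>, lighting: &str, surface: &str) -> String {
        format!(
            "#version 330 core\n{}\n{}\nout vec4 o_Color;\nvoid main() {{\n    o_Color = apply_light(surface_color());\n}}\n",
            snippets[surface], snippets[lighting]
        )
    }

    fn main() {
        let mut lib = HashMap::new();
        lib.insert("surface/checker",
            "vec4 surface_color() { return vec4(1.0, 0.5, 0.0, 1.0); }");
        lib.insert("light/lambert",
            "vec4 apply_light(vec4 base) { return base * 0.8; }");
        lib.insert("light/unlit",
            "vec4 apply_light(vec4 base) { return base; }");

        // Swapping the lighting model is just picking a different snippet;
        // no monolithic uber-shader with an #ifdef forest involved.
        let shader = compose(&lib, "light/lambert", "surface/checker");
        println!("{}", shader);
    }

Linking separate shader objects or swapping GLSL subroutines achieves the same goal further down the pipeline; the point is that variations are combined, not copy-pasted.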

Deferred shading
Problems: Very limited BRDF support. High fill-rate and GPU memory bandwidth load. Difficult to properly support MSAA.
Alternative: Tiled lighting. You can work around the DX11 hardware requirements by separating lights into layers (to be described).
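Here is a rough CPU-side sketch of the binning step, with lights already reduced to screen-space circles; the structures, tile size, and numbers are illustrative only, and the "layers" trick mentioned above would be a refinement on top of this:

    const TILE: u32 = 16; // pixels per tile side

    // A light already projected to screen space: center in pixels plus radius of influence.
    struct ScreenLight {
        x: f32,
        y: f32,
        radius: f32,
    }

    // For every tile, collect the indices of the lights that may touch it.
    fn bin_lights(width: u32, height: u32, lights: &[ScreenLight]) -> Vec<Vec<u16>> {
        let tiles_x = (width + TILE - 1) / TILE;
        let tiles_y = (height + TILE - 1) / TILE;
        let mut bins: Vec<Vec<u16>> = vec![Vec::new(); (tiles_x * tiles_y) as usize];

        for (index, light) in lights.iter().enumerate() {
            // Conservative tile bounds of the light's circle, clamped to the screen.
            let min_tx = ((light.x - light.radius).max(0.0) as u32) / TILE;
            let min_ty = ((light.y - light.radius).max(0.0) as u32) / TILE;
            let max_tx = (((light.x + light.radius).max(0.0) as u32) / TILE).min(tiles_x - 1);
            let max_ty = (((light.y + light.radius).max(0.0) as u32) / TILE).min(tiles_y - 1);
            for ty in min_ty..=max_ty {
                for tx in min_tx..=max_tx {
                    bins[(ty * tiles_x + tx) as usize].push(index as u16);
                }
            }
        }
        bins
    }

    fn main() {
        let lights = [ScreenLight { x: 100.0, y: 80.0, radius: 40.0 }];
        let bins = bin_lights(640, 480, &lights);
        let touched = bins.iter().filter(|tile| !tile.is_empty()).count();
        println!("light 0 touches {} of {} tiles", touched, bins.len());
    }

The shading pass then walks each tile's short list instead of every light in the scene, which is what makes many small lights affordable.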

Matrices
Problems: Difficult to decompose into position/rotation/scale. Take at least 3 vectors to pass to the GPU. Obligation to support non-uniform scale (e.g. you can no longer skip the invert-transpose of the 3x3 matrix to get the normal matrix).
Alternative: Quaternions and dual-quaternions. Both take 2 vectors to pass.
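A sketch of that packing (the names and layout below are my own choice): rotation as a quaternion in one vec4, position plus uniform scale in the other.

    struct Transform {
        rot: [f32; 4],       // quaternion (x, y, z, w)
        pos_scale: [f32; 4], // position.xyz, uniform scale in .w
    }

    impl Transform {
        // Flatten into the two vec4s that get uploaded as shader uniforms.
        fn to_gpu(&self) -> [[f32; 4]; 2] {
            [self.rot, self.pos_scale]
        }

        // Rotate a vector by the quaternion: t = 2*cross(q.xyz, v); v' = v + q.w*t + cross(q.xyz, t).
        fn rotate(&self, v: [f32; 3]) -> [f32; 3] {
            let [qx, qy, qz, qw] = self.rot;
            let cross = |a: [f32; 3], b: [f32; 3]| [
                a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0],
            ];
            let t = cross([qx, qy, qz], v).map(|c| 2.0 * c);
            let ct = cross([qx, qy, qz], t);
            [v[0] + qw * t[0] + ct[0],
             v[1] + qw * t[1] + ct[1],
             v[2] + qw * t[2] + ct[2]]
        }

        // Apply the full transform: scale, then rotate, then translate.
        fn transform_point(&self, p: [f32; 3]) -> [f32; 3] {
            let s = self.pos_scale[3];
            let r = self.rotate([p[0] * s, p[1] * s, p[2] * s]);
            [r[0] + self.pos_scale[0], r[1] + self.pos_scale[1], r[2] + self.pos_scale[2]]
        }
    }

    fn main() {
        // 90-degree rotation around Z, no translation, uniform scale of 2.
        let s = std::f32::consts::FRAC_1_SQRT_2; // sin(45 deg) == cos(45 deg)
        let t = Transform { rot: [0.0, 0.0, s, s], pos_scale: [0.0, 0.0, 0.0, 2.0] };
        println!("{:?}", t.transform_point([1.0, 0.0, 0.0])); // ~[0.0, 2.0, 0.0]
        println!("uploads as {:?}", t.to_gpu());
    }

Dual-quaternions replace the position half with the dual part, keeping the same two-vector footprint while also blending nicely for skinning.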

Context states
Problems: Bug-hunting is difficult because of bad problem locality. Assumptions about the context are easy to make, but if you decide to check them with assertions, why not just pass the whole state instead?
Alternative: Provide the whole state with each draw call. Let the caching work for you.
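A sketch of that idea (all names are mine): the caller always supplies the complete state, and the renderer diffs it against a cache, so nothing depends on what some earlier call happened to leave behind.

    // Everything a draw call depends on, spelled out explicitly.
    #[derive(Clone, PartialEq, Debug)]
    struct DrawState {
        program: u32,
        depth_test: bool,
        blend: bool,
        cull_back_faces: bool,
    }

    // The renderer caches the last applied state and only emits the transitions.
    struct Renderer {
        cached: Option<DrawState>,
    }

    impl Renderer {
        fn new() -> Self {
            Renderer { cached: None }
        }

        // A real backend would issue glUseProgram/glEnable/etc. for the fields
        // that changed; here we just log the transition.
        fn draw(&mut self, state: &DrawState, mesh: &str) {
            if self.cached.as_ref() != Some(state) {
                println!("applying state change: {:?}", state);
                self.cached = Some(state.clone());
            }
            println!("drawing {}", mesh);
        }
    }

    fn main() {
        let opaque = DrawState { program: 1, depth_test: true, blend: false, cull_back_faces: true };
        let transparent = DrawState { blend: true, ..opaque.clone() };

        let mut renderer = Renderer::new();
        renderer.draw(&opaque, "terrain");     // state applied
        renderer.draw(&opaque, "building");    // cache hit, no redundant state calls
        renderer.draw(&transparent, "window"); // only now does the state change again
    }

Correctness no longer relies on hidden context, and redundant state calls are filtered out in one place.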

C++
Problems: Memory management and safety. Compiler-generated copy operators/constructors. Pain dealing with headers and optimizing the compile time. Many many lines of code.
Alternative: Rust. Other "safe" languages (the .NET family, Java, Python) are not as low-level and often trade performance for safety (e.g. a global GC phase causes an unacceptable frame rate interruption).

All I actually know is that there are a thousand and one difficult architectural issues in a graphics engine, and there is no silver bullet for most of them. For the most common solutions I listed possible alternatives, but they are nowhere near flawless. I hope that one day the amount of experience I've gained will magically transform into the quality of my decisions, and I will finally know the right answers.

Thursday, November 1, 2012

Rust

Early this morning I woke up with a single thought echoing loudly in my brain: "Dart was a mistake; it was not made for me. I should look for some statically typed practical language instead". Even though my KriWeb project (written in Dart) was not being actively developed, I agreed (with my dreaming counterpart) that the instrument I chose for this project iteration was far from perfect. Suddenly, I felt the urge to look for something ideal, something that seemed so real, as if I had been reading its specification the other day... And I just needed to recall its name...

I started looking for it on the web. There were many interesting suspects among the new languages. Ceylon, for example, features immutability by default (which highly encourages a functional style), which seemed very familiar and close to what I was looking for. It is a very nice language all in all, but it currently runs on the Java VM and was heavily inspired by it, which pushed me away. Go sounded attractive due to the strong support from Google, but it disappointed me with its lack of user generics. Zimbu looked too original, while Haxe seemed to try to cover too many use cases. I reached the 5th page of Google search results, and there still wasn't any trace of it. Maybe it was a dream?..

One step away from giving up my search, I stumbled upon this Holy Grail of programming. Its name is Rust, developed by the Mozilla Foundation. Suddenly, I remembered that shining website interface, that clear language specification I had read a while ago. I found it, at last! Let me explain why I was so happy:
  • Strong static typing with inference, only explicit mutability. This is so right and so rare to see at the same time. Unlike Dart, most of my mistakes will be found at compile time.
  • No segfaults while still compiling to native code. The memory model is protected and guaranteed to work without access violations under normal circumstances. It has the potential for C-like performance, hence being a better tool for various tasks.
  • User generics with constraints, pattern matching (Haskell-style). Yes, it took the best from my beloved purely functional language.
  • Less statements but more expressions and closures. This makes it even more sleek and functional.
  • Syntax extensions. Hello, Boo macros!
  • Structure compatibility with C. Using external API's (i.e. OpenGL) gets easier.
Overall, the language and its environment seem very nice. It is simple yet powerful, and feels very promising. I'm looking forward to working closely with this gem, and I'm very excited :)
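
To give a taste of a few of those points (type inference, explicit mutability, constrained generics, and pattern matching), here is a tiny sketch; the syntax keeps evolving, so take it as a flavour rather than gospel:

    // Generics with constraints: works for any type whose values can be ordered.
    fn largest<T: PartialOrd>(items: &[T]) -> Option<&T> {
        let mut best = items.first()?; // mutability has to be spelled out
        for item in &items[1..] {
            if item > best {
                best = item;
            }
        }
        Some(best)
    }

    enum Shape {
        Circle { radius: f64 },
        Rectangle { width: f64, height: f64 },
    }

    // Pattern matching over an algebraic data type; exhaustiveness is checked at compile time.
    fn area(shape: &Shape) -> f64 {
        match shape {
            Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
            Shape::Rectangle { width, height } => width * height,
        }
    }

    fn main() {
        let lengths = [3, 7, 2]; // types are inferred; bindings are immutable by default
        println!("largest: {:?}", largest(&lengths));
        println!("area: {}", area(&Shape::Circle { radius: 1.0 }));
    }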