Here's an interesting post I ran across at the Smart Economy forum. It's interesting because it deals with wireless sensor networks as viewed by economists, which is much different from WSNs as viewed by people in the industry. They present four scenarios of how WSNs will play out (note: IoT stands for "Internet of Things"... it's heinous...):

Scenario 1: Fast Burn

In "Fast Burn" the IoT develops rapidly but in a limited fashion, and fails to sustain its momentum. Although impacts become quite significant in particular application areas (industrial automation, health care, and security), the IoT doesn't fulfill the promise of becoming pervasive (and thus is of limited importance to everyday lifestyles, business operations, and the conduct of government).

Scenario 2: Slowly But Surely

In "Slowly But Surely" the IoT becomes pervasive, but not until 2035 or so. Outcomes are somewhat similar to those of "Ambient Interaction," but there are substantial differences. The relatively slow development of the technology gives businesses and governments time to assimilate developments, allaying the most disruptive risks.

Scenario 3: Connected Niches

In "Connected Niches" the IoT evolves along application pathways that promise rapid payback and that can overcome resistance and indifference. Demand is commensurate with evolutionary but not revolutionary cost reductions, moderate technology progress that leaves some problems largely unsolved. Industries show reluctance to fully collaborate.

Scenario 4: Ambient Interaction

In "Ambient Interaction" the IoT arises rapidly and pervasively, favored by technology progress, business collaboration, and innovation-friendly policies. Strong demand arises across several major sectors of the economy, as technological wizardry combined with creative business developments stimulate people's appetites for killer applications that reduce labor and tedium, confer peace of mind, and blur the lines between work, play, and commerce.

In my opinion, scenario 3 is playing out right now, with certain industries adopting WSNs. It's easy to see the pairings that have occurred: Zigbee and smart meters, Z-Wave and home automation, WirelessHART/ISA100 and industrial networks, etc... (Aside: I hope to add the pairing for 6LoWPAN soon...). Those are the current "application pathways" that I think will ease people into the concept of WSNs. However, I'm pretty sure it won't become pervasive for a long time. The reason is that people just don't like to change. It took me forever to give up my pager and actually buy a cell phone. I can't see my mom setting up a WSN anytime soon, or even a network at all. Actually, I had to set up the network for her, and every time I go back to the US, she has a laundry list of computer things for me to fix.

Once people get comfortable with the concept of WSNs in the initial applications, I suspect that the applications will slowly branch outward. Whether or not they'll become ubiquitous, pervasive, ambient, or any other BOTD (buzzword of the day) remains to be seen. But I feel confident that they're going to change the industries that they touch.

Zigbee 2007 (Residential) has two methods of routing: mesh routing and tree routing. Although most people talk about Zigbee's mesh routing capability, not many people know much about tree routing. There's actually a good reason it isn't discussed much, and I'll get into that later. However, I do receive the occasional question on how Zigbee tree routing works. Mostly, it's from people who want to run Zigbee using the 802.15.4 beacon mode, since tree routing is the only routing method supported in that case.

In order to understand tree routing, it's first important to understand the underlying principle: how addresses are allocated. Once that is understood, the routing becomes trivial.

Tree Address Allocation

Tree routing works by routing frames based on the addresses of the routers in the network. In order to do this, the routers that join need to have their addresses allocated in a special way. Although the address allocation scheme can be summarized in a handy little algorithm called CSkip, which can be found in the Zigbee spec, it's probably best to explain qualitatively how it works.

Actually, the allocation scheme isn't very sophisticated. You start off with an address space that is calculated based on the maximum number of router children per device (Rm), the max children (routers + end devices) per device (Cm), and the max depth (Dm) of the network. To make things simple, let's just take the case where max children = max routers, which means that all children will be routers and no end devices are allowed on the network. This simplification just makes it easier to calculate the address space.

Based on the information above, we can calculate the size of the address space, since it will be the same as the maximum number of devices on the network. In other words, each device gets one address. As an example, let's say that the maximum number of routers that can join a device is 2, the maximum number of children per device is 2 (hence all children are routers), and the max depth of the network is 2. In Zigbee speak, this would be labeled (Rm=2, Cm=2, Dm=2). This would give us a network that looks like this:

Basic Tree Network

From the picture, you can see that the maximum number of devices is 7, hence the address space spans from 0 to 6. I also purposely drew it as a tree (in this case a binary tree) for reasons that will become clear below.

The next issue is how to divide the addresses among the devices. Although it would make sense to allocate them on a first-come, first-served basis, that wouldn't give us any routing information. Instead, the addresses are handed out based on the tree hierarchy, which, incidentally, is where the name comes from. Here is how the addresses would look for each device:

Basic Tree Network with Addresses

All of these addresses are fixed by the position in the tree that a device occupies. You can see that the numbers progress sequentially from top to bottom and left to right as you follow the flow of the tree. Hence the first address, which corresponds to the first device on the network, is 0. This is the coordinator, and the coordinator's address is always zero in a tree address allocation scheme. The first device that joins the coordinator will have an address of 1, and its children will have 2 and 3. The second device that joins the coordinator will have an address of 4, and its children will be 5 and 6.

In more abstract terms, when the coordinator starts the network with the given parameters, then its address space will span from 0 to 6. It takes the '0' address for itself and then splits the remaining addresses among the routers that can join to it. In our case, the first router that joins gets the first half of the space and the second router that joins gets the second half. The following illustration demonstrates how the addresses are split up based on depth. The boxes in blue are the addresses that get taken by the device that owns the space.
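This splitting is exactly what the spec's CSkip algorithm captures. Here's a minimal Python sketch (the function names are mine, not from the spec) that computes the sub-block size at each depth and hands out addresses; for (Rm=2, Cm=2, Dm=2) it reproduces the addresses 0 through 6 from the figure:

```python
def cskip(d, Cm, Rm, Lm):
    """Size of the address sub-block handed to each router child at depth d."""
    if Rm == 1:
        return 1 + Cm * (Lm - d - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - d - 1)) // (1 - Rm)

def allocate(parent_addr, depth, Cm, Rm, Lm):
    """Recursively collect the tree addresses of a parent and everything below it."""
    addrs = [parent_addr]
    if depth >= Lm:
        return addrs              # max depth reached; no children allowed
    skip = cskip(depth, Cm, Rm, Lm)
    for n in range(Rm):           # the n-th router to join this parent
        child = parent_addr + 1 + skip * n
        addrs += allocate(child, depth + 1, Cm, Rm, Lm)
    return addrs

# allocate(0, 0, 2, 2, 2) yields the coordinator at 0, routers at 1 and 4,
# and leaves at 2, 3, 5, 6 -- matching the figure.
```

Running `allocate(0, 0, 3, 3, 2)` likewise yields the 13 addresses of the (Rm=3, Cm=3, Dm=2) case further below.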

Address Allocation

Of course, this allocation is based on our simple test case with a maximum of 2 routers and a max depth of 2. Here's how the addresses are allocated if we change our parameters to a maximum of 3 routers and 3 children. (We can't have more routers than total children, since each router is counted as a child.) The depth is kept the same.

Address Allocation - 3 Routers

In case you're wondering how I came up with the total address space, it's actually quite simple. For the case with 2 routers, 2 children, and a network 2 deep, the total space is:

Total Space = 2^2 + 2^1 +2^0 = 7

For the case with 3 routers, 3 children, and 2 deep:

Total Space = 3^2 + 3^1 + 3^0 = 13

If the number of children equals the number of routers, the total number of devices is just a geometric series over the depths:

Total Space = Rm^Dm + Rm^(Dm-1) + ... + Rm^0 = (Rm^(Dm+1) - 1) / (Rm - 1)

And for the general case where the number of children is not equal to the number of routers (ie: end devices also exist), each parent additionally carries (Cm - Rm) end devices, so the max number of devices, and hence the max value of the address space, is:

Total Space = (Rm^(Dm+1) - 1) / (Rm - 1) + (Cm - Rm) * (Rm^Dm - 1) / (Rm - 1)

where the second term counts the end devices hanging off the coordinator and the routers at depths 0 through Dm-1.
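As a quick sanity check, here's a short Python sketch of those closed-form totals (my own function name; it assumes Rm > 1 so the geometric-series division is valid). It reproduces the 7 and 13 computed above:

```python
def total_space(Cm, Rm, Dm):
    """Closed-form size of the tree address space (assumes Rm > 1)."""
    # Coordinator plus all routers at depths 0..Dm: a geometric series.
    routers = (Rm ** (Dm + 1) - 1) // (Rm - 1)
    # (Cm - Rm) end devices under each parent at depths 0..Dm-1.
    end_devices = (Cm - Rm) * ((Rm ** Dm - 1) // (Rm - 1))
    return routers + end_devices

# total_space(2, 2, 2) -> 7, total_space(3, 3, 2) -> 13
```

With one end device allowed per parent on the binary example (Cm=3, Rm=2, Dm=2), the formula gives 7 routers plus 3 end devices, for a total of 10.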

That's basically how the tree address allocation works for Zigbee. Now that we know how the addresses are doled out, it will be easier to see how they are used for routing.

How Tree Routing Works

If I didn't lose you on the address allocation scheme, then the routing should be a piece of cake. Now that we've established that each device has its own address space based on its depth, we can determine how to route a frame based on its destination address. There are only two directions a frame can go in tree routing: up the tree or down the tree. If the destination address is within the device's address space, the frame is routed down. Otherwise, it's routed up.

Here's an example of a frame originating from device 2 with a destination address of 6. Each of the arrows is based on comparing the destination address with the device's address space (except the horizontal one, which I just used to show the flow). If the destination address lies outside the device's address space, the frame gets routed up. Otherwise, it gets routed down.

Zigbee Tree Routing

That's pretty much all there is to tree routing. It is heavily dependent on the addresses of each device, and if you understand how addresses are allocated and the concept of each device having its own address space, then understanding how the frames get routed is pretty simple.
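If you prefer code to prose, the up-or-down decision can be sketched in a few lines of Python (a simplified sketch with my own function names, reusing the spec's CSkip sizing; it assumes each device knows its depth and its parent's address, which Zigbee devices do):

```python
def cskip(d, Cm, Rm, Lm):
    """Size of the address sub-block handed to each router child at depth d."""
    if Rm == 1:
        return 1 + Cm * (Lm - d - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - d - 1)) // (1 - Rm)

def next_hop(addr, depth, parent, dest, Cm, Rm, Lm):
    """Pick the next hop for a frame sitting at the device with address `addr`."""
    if depth > 0:
        # Our whole sub-block is sized by the parent's CSkip.
        block = cskip(depth - 1, Cm, Rm, Lm)
        if not (addr < dest <= addr + block - 1):
            return parent                      # dest is not below us: route up
    # dest is below us: forward into the router child's sub-block that holds it
    skip = cskip(depth, Cm, Rm, Lm)
    return addr + 1 + ((dest - (addr + 1)) // skip) * skip
```

Walking a frame from device 2 to device 6 with this function hops through 1, then the coordinator, then 4, exactly as in the illustration.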

The Problems with Tree Routing

Tree routing is a clever and simple way of routing frames without a complex mechanism like AODV or one of the other routing algorithms out there. The big benefit is that it doesn't require routing tables, so you can use it in networks where the devices are very constrained on resources, ie: memory. However, there is a fatal flaw in tree routing that renders it almost completely useless in a real-world network.

The big problem is that this routing mechanism is fundamentally unreliable because the addresses are static. A device's position in the tree hierarchy is set once it joins the network and receives its address, and afterwards the tree is fairly inflexible to changes in network structure. So if a device fails or moves out of range, it also takes out all the devices underneath it in the hierarchy. It's possible to recover from this situation if all of those devices manage to rejoin the network under another parent, but this can be a painful and error-prone process, potentially involving many devices.

Another fatal flaw is that there is no way to recover if the coordinator fails. It is the single point of failure for this routing method. In mesh routing, if the coordinator fails, things are still okay because from a mesh point of view, the coordinator is just another router in the routing tables.

However in tree routing, the coordinator is the vertex of the tree, where all frames must pass if the destination device is not located on the same branch as the source device. If the coordinator fails, then all the main branches become isolated and your network is pretty much useless. Currently, there's no place in the Zigbee spec (that I'm aware of) that describes how to recover from this failure.

From a reliability standpoint, tree routing is fairly dangerous. In fact, it was probably dangerous enough that the Zigbee Alliance decided to remove tree routing and addressing from the Zigbee Pro spec, and I suspect that in the future it may get taken out of Zigbee completely. The FreakZ stack supports tree routing, but only as the frame forwarding method of last resort when all other methods fail.

Well, I hope this little blurb on tree routing was helpful. I don't really mean to knock the algorithm, since I think it's pretty clever, but unless the Zigbee spec addresses some of its reliability flaws, I would consider it dangerous to use. I guess what I'm implying in this article is that Zigbee shouldn't be used in beacon mode.

How do you like them apples...

I'm back once again to excite you with the details of the IEEE 802.15.4 spec. Actually, it's not so boring if you know what you're looking for. But if you were like me, wading through the endless SDL diagrams and trying to make sense of the bizarre acronyms, then yeah, it's a real snoozer. This will be the last in the three-part 802.15.4 series and should be quick: just a summary of the MAC layer service functions defined in the spec. As I mentioned before, this will only explain the service functions as they apply to Zigbee.

In this part, I'm going to go over the 802.15.4 service interfaces, since the spec can make them rather cryptic, especially when you see something like MCPS-DATA.indication. I'm going to try to explain what the data and management services are and how they're used, and do my best to avoid the spec's formality so that it's easier to get an implementation view of things.

If you look at the MAC layer block diagram in the 802.15.4 spec, you're going to see a lot of weird terms and acronyms. An example would be the entry ports to the layer, called "MCPS-SAP" and "MLME-SAP". Ugh. Technically, those stand for the "MAC Common Part Sublayer - Service Access Point" and the "MAC Layer Management Entity - Service Access Point". Unfortunately, I've never seen any use for those ports. When I designed the MAC layer of the FreakZ stack, I found it easier to call the functions directly. Other stacks I've seen either call the functions directly or use function pointers that serve as the API for the layer. Going through a level of abstraction like ports would just add overhead and make the stack bigger.

Before I get too distracted, let me get into the real meat of this post. The MAC layer is divided into two functional partitions: data handling and device/network management. The data handling portion can be thought of as the data path, and the management portion as the control path. The data path is pretty straightforward, but should be optimized since it will be accessed the majority of the time. The management (control) path is not accessed very frequently, but can be complex and must be implemented. At least part of it, anyway. With no further ado...

MAC Data Service

Data Request

This is for outgoing data frames and could more appropriately be called mac_tx. The Zigbee network layer calls this function once it has decided where a frame needs to go (based on its routing tables, etc). All of the required info is provided, such as the source and destination addresses, the PAN ID, and the transmit options, so this function basically just needs to format that data into the relevant MAC header and slap it onto the front of the Zigbee network frame. Once the header is assembled, the final touch is to add the frame length to the front of the frame (ie: the PHY header).

Once the frame is assembled, there are actually two ways to send it. If it's going to another router, or to an end device whose receiver is always on, the frame is sent directly via the radio. Otherwise, if the destination is a sleepy end device, the frame needs to be sent as an indirect transfer: it sits in the indirect queue until the destination device wakes up and polls its parent. Once the poll comes in, the frame gets sent to the destination.
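As a rough sketch of that direct-versus-indirect decision (hypothetical names and data structures of my own, not from any particular stack or the spec):

```python
from collections import deque

sent = []                       # stand-in for the radio driver's TX path
def radio_send(frame):
    sent.append(frame)

indirect_queue = deque()        # frames parked for sleeping end devices

def mac_tx(frame, dest_rx_on_when_idle):
    """Send directly, or park the frame until the sleepy child polls for it."""
    if dest_rx_on_when_idle:
        radio_send(frame)       # router, or an end device whose radio stays on
    else:
        indirect_queue.append(frame)

def on_data_poll(child_addr):
    """Parent's handler for a data-request (poll) from a sleeping child."""
    for frame in list(indirect_queue):
        if frame["dest"] == child_addr:
            indirect_queue.remove(frame)
            radio_send(frame)
            return True
    return False                # nothing pending; the child can go back to sleep
```

A real stack would also time out and purge stale entries in the indirect queue, which ties into the comm status indication described further down.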

Data Confirm

This is a separate function from the data request and usually runs asynchronously to it. Its purpose is to communicate the status of the transmitted data. If no response was received and the maximum number of transmission retries was exceeded, it sends a failure status. Otherwise, it sends a success status or another code describing any issues that occurred during transmission.
Data Indication
This is for incoming data and could more appropriately be called mac_rx. Usually, a flag is set in the Rx ISR when a frame comes in, which triggers a call to this function. The MAC code then processes the inbound frame and sends it up the stack.
Purge

Used to remove a pending frame from the indirect transaction queue. Not required for Zigbee.


MAC Management Service

Association

The association function is used to join a node to a parent. Before a node is allowed to communicate on the network, it must go through the association process and join a parent that is already in the network. Zigbee uses the association service in its NWK_JOIN service.

Disassociation

This would normally be used to remove a node from its parent on the network. However, Zigbee doesn't use this service and instead defines its own command frame, NWK_LEAVE, which is sent as a data frame.

Beacon Notify

This service informs the Zigbee layer that a beacon has arrived and requires processing. Zigbee uses information in the beacon to communicate router/end device capacity and whether or not joining is allowed.

MAC PIB Get/Set

I have a hard time seeing why these exist as services, since they basically just manipulate a data structure that holds the MAC settings. In my implementation, I just manipulate the table directly via a pointer.

GTS

Used for reserving guaranteed time slots in beacon mode. Not used by Zigbee.

Orphan Notification

When a device loses its parent, it's considered an orphaned device. There are a couple of ways this can happen, but normally it occurs through a failure of the parent, or when a mobile end device moves out of range. An orphaned device performs an orphan scan by broadcasting an "orphan notification" command frame in the hope of finding its parent. If the parent gets the notification, it informs the device that it's still there, and the orphan can rejoin it.

Reset

Resets the MAC layer and sets all values in the PIB to their defaults.

Rx Enable

Not used for Zigbee. Zigbee doesn't specify transceiver control, except that routers must always have their receivers on. End device power handling depends on the stack architecture and requirements, and transceiver control is usually implemented at the driver level anyway.

Scan

There are three types of scans used in Zigbee: the energy scan, the network (active) scan, and the orphan scan. The energy scan is used during network formation to select a channel with low noise. The network scan is used during both network formation and network discovery: during formation, the channel is selected based on its noise level and the number of other networks already on it; during discovery, the active scan is used to find other networks to join. The orphan scan, as mentioned above, is used by an orphaned device. It consists of broadcasting an orphan notification frame and waiting to see if the parent responds.

Comm Status Indication

This is mostly used to inform the Zigbee network layer that an event occurred in the indirect transaction queue. Indirect transactions are transactions that are buffered until the destination device polls its parent to retrieve the frame. If a buffered frame is delivered successfully, a comm status indication is sent with a success code; if a frame times out and is purged, a comm status indication is sent with the relevant code.

Start

This is used to start the MAC layer and initialize the device. It can also be used to change certain settings in the MAC layer, such as the superframe configuration. In Zigbee, it's just used to start the device and is normally called after a MAC reset.

Sync

Not used in Zigbee, except on a beacon-enabled network.

Sync Loss Indication

Not used in Zigbee, except on a beacon-enabled network.

Poll

Used by the Zigbee nwk_sync request to poll the parent for any pending data buffered for the device. Calling the poll function generates a data request command frame and sets off the indirect transaction sequence.

That should wrap it up for this series on IEEE 802.15.4 in the context of Zigbee. From here, I'll probably start up the next series: a detailed look at the Zigbee layers and how they're implemented.

Finally, the weekend comes and I can set aside time to write blog posts without feeling guilty about neglecting the software. Things are busier than I expected, even during this holiday season. I was hoping things would slow down so that I could focus this month on developing the hardware implementation of the FreakZ stack, but unfortunately it was not to be. I'm going to have to reduce my consulting time next year, since I've decided that my number one priority will be the FreakZ stack and building it out.

A lot has happened in the WSN community in the past few weeks, but one of the most surprising events was that Zensys was acquired by Sigma Designs. It even caught the Z-Wave Forum admins off guard. I'm also sure it caught a lot of companies by surprise, since it throws a cloud over Z-Wave. The issue is that the Z-Wave spec and software are controlled by Zensys, which is now controlled by Sigma. So I suspect a lot of companies that adopted Z-Wave are waiting to see what Sigma's position on it is.

Personally, I doubt that much will change initially, since Z-Wave is starting to build momentum and Sigma's main goal is obviously to sell chips into the community that Zensys built. However, I think this is a good example of how a proprietary standard built on top of a proprietary architecture can be dangerous. Although Z-Wave Alliance members can propose new device classes and spec modifications, it was ultimately Zensys that controlled both the software and what went into the specification. When a change of strategy or ownership occurs in the controlling company, which it just did, adopters are in a weak position and can only wait and see what happens.

Although I said the spec will probably be okay for now, the dangerous thing is that Z-Wave is not a core focus of Sigma, which specializes in video streaming chips. If Z-Wave starts to lose market share to other protocols such as Zigbee or 6LoWPAN, then it's quite possible that support for the Z-Wave chips, software, and/or specification may cease to exist. So the main issue is what happens a few years from now, and whether Sigma can turn Z-Wave into a profitable venture.

Most people who know me or have been following the blog know that I'm biased against proprietary standards and protocols (and software, for that matter). I still believe that it's safest to adopt protocols built on open standards. Z-Wave in particular has two critical risks that adopters need to accept in order to use it: the RF radio supply and the software. If either of those disappears, the spec is useless.

If Z-Wave had used the 802.15.4 radio from the beginning, there would be no danger to the supply of RF chips, since there are multiple suppliers for that radio. As can be seen from the recent NY Times article on Meshnetics, software is also a critical point. Since Meshnetics let go of most of its software engineers, the maintenance, support, and future roadmap of the Meshnetics Zigbee stack are in jeopardy. In that case, adopters should have requested that the software be put in escrow in case anything happened to the company.

The main point is that there are risks associated with anything proprietary and the Zensys/Sigma deal serves as a good example of that.

Hmmm...this post took on a completely different direction than I anticipated... 

When the economic news is too unbearably depressing to allow me to even write software, I turn to the few things that I know I can count on: my Groove Salad streaming MP3 radio station (legal, of course), a glass of wine, and the tutorial series on 802.15.4 and Zigbee that I have yet to finish.

When we left off last time, Jan was sleeping with Jim, the transsexual who goes by the name of... wait... no, we left off at the 802.15.4 PHY layer. Well, the PHY layer is pretty straightforward unless you're an RF IC engineer, so now we're going to get into the real meat of things... the MAC layer.

The MAC layer is where all the fun is in terms of a software protocol stack. Actually, the 802.15.4 MAC is fairly complicated and feature-packed, which is why I named this series "802.15.4 in the Context of Zigbee". As the title implies, I'm only going to cover enough of 802.15.4 to understand how it fits into a protocol like Zigbee. Luckily, Zigbee eschewed a good portion of 802.15.4 in the name of simplicity and getting a spec out quickly, so you won't need to know a whole lot about it. Unfortunately, this has also become a weakness of Zigbee.

Beacon Modes

One of the most glaring omissions of the Zigbee spec is that it doesn't use the 802.15.4 MAC in beacon mode (except for tree-only routing implementations which are impractical as I'll explain in some later article).

The MAC layer defines two basic modes of operation: beacon mode and non-beacon mode. Beacon mode is timing dependent, where a beacon frame is sent out at some set interval defined by the implementation. The beacon defines the start of a superframe which is basically the interval between the beacons, and is used as a way for the devices on the network to synchronize with each other. The superframe is divided into two parts: the active part where data transfers occur, and the inactive part where the device can go to sleep. For very low power operation, you can define the ratio of the active time to the inactive time to be very low so that the device spends most of its time sleeping.


In non-beacon mode, there is no concept of a superframe, and beacons are only used to discover what networks exist on a channel. In other words, beacons are only used when a device is first turned on and scans for networks to join. Non-beacon mode is completely asynchronous, so the upper layer protocol needs to treat each node as completely independent. This has certain implications, especially for power consumption. One of the biggest complaints about Zigbee is that the routers are not allowed to sleep, due to the asynchronous nature of non-beacon mode. Since a router never knows whether an end device is sleeping, it needs to always be on to receive and buffer frames for its children. The children poll the router periodically to see if any messages are buffered for them. The fact that the routers are always on means that certain types of applications are infeasible for Zigbee, such as those where the routers don't have access to a mains supply.