Now that I'm back from the US, I'm trying to re-establish my rhythm. I think it's important to have a rhythm for any type of large undertaking. My sister once told me that Haruki Murakami, the famous Japanese writer, writes continuously every day for four hours in the morning. Once he finishes his writing, he doesn't need to think about it any more for the rest of the day. Four hours might not seem like a lot, but built up over a long period, it's what enables him to finish his usual 500+ page books without burning himself out. I believe it's exactly the same for coding, dancing, or even just working out in the gym.

I kind of pattern my work schedule after what my sister told me, although my coding time often wavers above and below the four-hour mark. Four hours of unbroken coding time, though, is nearly impossible to find in a typical office environment. My experience is that there are so many distractions inside a typical office workplace that even getting two hours of pure coding time is very challenging. When I was working in the cubicle farms in the US, I found that most of my time was spent just trying to seem busy, while not getting a lot of real work done. Glad I'm out of that life now.

Anyways, now that I'm back in Tokyo, I can settle back into my own schedule again. Seeing as it's 4 am right now, I figure I haven't fully recovered from my jetlag. It gives me a good opportunity to start writing in my blog again and research the part of the stack that I'm currently working on.

As I mentioned before, I'm moving up to the Application Layer and the Zigbee Device Object (ZDO) implementation. The Application Layer of the Zigbee spec consists of three main parts:

  1. The Application Sub-Layer (APS): This is the workhorse that provides the data Tx services and routes the incoming data to the correct endpoint in the application layer. All endpoints (profiles) need to use the APS to communicate with other devices.
  2. The Application Framework (AF): This is kind of a nebulous portion of the application layer that isn't really well defined. The framework needs to define the endpoints and provide a means for each endpoint to register itself with the framework. Then, when a remote node does a service discovery on the local node, the application framework helps assemble the list of active endpoints, descriptors, and profiles.
  3. The Zigbee Device Object (ZDO): This is a required endpoint on all nodes and always resides in endpoint 0. The ZDO implements the Zigbee Device Profile and has the following responsibilities:
    • Device and Service Discovery
    • Security Management
    • Network Management
    • Remote Node Management (usually for provisioning)
    • Binding Management
    • Group Management
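To make the division of labor above concrete, here's a minimal sketch in C of how an application framework might let endpoints register themselves, how the APS could route an incoming frame to the right endpoint, and how the active endpoint list for service discovery could be assembled. All names here are hypothetical illustrations, not the stack's actual API.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_ENDPOINTS 8

/* Hypothetical endpoint descriptor: profile ID plus an rx callback. */
typedef struct {
    uint8_t  ep;          /* endpoint number, 1-240 (0 is reserved for the ZDO) */
    uint16_t profile_id;  /* e.g. 0x0104 for the Home Automation profile */
    void (*rx)(const uint8_t *data, uint8_t len);
} endpoint_t;

static endpoint_t ep_table[MAX_ENDPOINTS];
static uint8_t ep_count;

/* The application framework lets each endpoint register itself. */
int af_register(uint8_t ep, uint16_t profile_id,
                void (*rx)(const uint8_t *, uint8_t))
{
    if (ep_count >= MAX_ENDPOINTS) return -1;
    ep_table[ep_count].ep = ep;
    ep_table[ep_count].profile_id = profile_id;
    ep_table[ep_count].rx = rx;
    ep_count++;
    return 0;
}

/* The APS uses the table to route an incoming frame to the right endpoint. */
void aps_deliver(uint8_t dst_ep, const uint8_t *data, uint8_t len)
{
    for (uint8_t i = 0; i < ep_count; i++) {
        if (ep_table[i].ep == dst_ep && ep_table[i].rx) {
            ep_table[i].rx(data, len);
            return;
        }
    }
}

/* Service discovery helper: fill 'list' with the active endpoint numbers. */
uint8_t af_active_endpoints(uint8_t *list, uint8_t max)
{
    uint8_t n = 0;
    for (uint8_t i = 0; i < ep_count && n < max; i++)
        list[n++] = ep_table[i].ep;
    return n;
}
```

The real framework would also store simple and node descriptors per endpoint; this just shows the registration and discovery bookkeeping that both the APS and the ZDO lean on.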

Once the application layer is implemented, I should theoretically have a working Zigbee stack. However, the fun doesn't stop there. In order to implement the Zigbee standard profiles, you also need to implement the Zigbee Cluster Library (ZCL) as well as the profiles themselves. If the application framework is good, then I'm hoping the profile implementations will be easy.

In any case, I'm currently working on the ZDO, and once that's finished, gonna move to the Application Framework. Since the APS is done, that should give me a full stack. At that point, I can start cleaning up the code and prepping it for an initial release. As I mentioned before, the initial release will probably start at v0.5. A full 1.0 release will need to be (relatively) stable and include security handling and an over-the-air bootloader.

Anyways, that's the plan for now... 

I'm still in California and enjoying a nice little break from development. I got to visit my brand new niece and eat some good ol' fashioned American food like tacos. 

I have been working on the stack here and there and was stuck at the binding and group tables implementation. Finally, I decided that I'm going to forego implementing the binding and group tables in the Application Sub-Layer (APS) and go straight to the Application Layer to implement the Zigbee Device Object.

The binding tables changed from the 2006 spec to the 2007 spec, so there is no longer coordinator binding. All binding is done on the source node, which is the node that originates the frame. However, there are still some discrepancies in the spec, since it refers to certain binding fields that don't exist anymore. I also can't really see the actual usage scenario in my head, so it's a bit hard to visualize the data flow.

The group tables for multicasting at the APS layer finally got unified with the group tables for multicasting at the NWK layer. Yes, they previously had redundant group tables and different methods of multicasting. APS multicasting would send an individual frame to all members of the group, while the NWK layer multicast would send a broadcast frame and only members with the correct group ID would decode it. Anyways, it was one of the quirks of the Zigbee spec before.

I decided a while back, when I was working on the NWK layer, to hold off on multicasting. That's because I'd prefer to get some actual feedback on the stack from users, so releasing early is preferable to completeness, at least initially. Once I get the feedback, I can use it to tweak things and also implement features like multicasting, etc...

So I decided to move on up to the ZDO, which I believe is much more important. The Zigbee Device Object plays a large role in the actual device: initializing everything, interfacing to the NWK and APS layers, doing device discovery, and basically managing everything. When two devices communicate, it's the ZDO that initially sets up the connection between them. Finishing it will bring me much closer to being able to release the stack.

The initial release might not include binding and grouping, but I figure that it should be okay. They can be implemented at a later time when I can actually see the use cases, instead of trying to guess at how they will be used by others.

Until then, gonna enjoy some nice Bay Area salami (Molinari) and check out all the weirdos in Berkeley. Unfortunately, I seem to fit in quite well here... 

Ahhh...the California sun. It's been about six months since the last time I was back, and it's wonderful. I arrived last night, on July 4th, and ended up going to an Independence Day concert with the local symphony. I think I'm only able to appreciate Independence Day in the US now that I'm living outside the country. When I was living here, I always took it for granted and was kind of apathetic about everything.

Well, I didn't get much done before I came here. I was actually experimenting with implementing dynamic RAM allocation inside the stack. Normally, it's a bit dangerous to use malloc and free on an embedded system for data structures you use a lot, due to the eventual fragmentation of the memory. However, statically allocating all of the structures required by Zigbee is grossly inefficient, because there are so many tables in use, and currently I need to define the max number of entries for each one.

It would be much better to dynamically allocate the table entry from a pre-specified memory heap that can be shared among all the tables. That way, you can still control the amount of memory that you are using and you would have a much better utilization of that memory.

I discovered the managed memory code library in the Contiki OS (mmem.c/h) and thought hmmm...maybe I can use this to do dynamic allocation. So I spent a day writing a dummy program and test routines that emulate my stack to see how the managed memory library would behave. The managed memory library basically allocates memory from a pre-specified memory array, and after a block is freed, compacts the whole array downwards so that the freed memory won't leave a gap (fragment). In this way, it prevents the memory fragmentation that is the main issue with malloc and free.

However, I discovered a problem with using the managed memory with linked lists. Most of my tables are currently implemented as linked lists, because that makes insertion, deletion, and searching much easier. But when a table entry is freed, the managed memory compacts the heap, which can change the addresses of the other table entries, and the linked-list pointers don't get updated. This is a severe problem, so unfortunately I wasn't able to use the managed memory library for the stack. I was pretty disappointed about that, because I think I might have been able to cut my RAM usage in half, and still remain safe, if I had a means to dynamically allocate memory.
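To illustrate the problem, here's a toy compacting allocator in the spirit of Contiki's mmem (a simplified re-implementation written for this post, not the actual mmem.c). Handles registered with the allocator get their pointers patched when the heap compacts, but a raw pointer cached outside the handle list, which is exactly what a linked-list next pointer is, silently goes stale after a free.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define HEAP_SIZE 256

/* A managed-memory handle, mmem-style: the allocator tracks every
   handle so it can fix up 'ptr' whenever the heap is compacted. */
struct mmem {
    struct mmem *next;
    size_t size;
    void *ptr;
};

static uint8_t heap[HEAP_SIZE];
static size_t heap_used;
static struct mmem *handles;

/* Allocate 'size' bytes from the top of the heap. Returns 1 on success. */
int mmem_alloc(struct mmem *m, size_t size)
{
    if (heap_used + size > HEAP_SIZE) return 0;
    m->ptr = &heap[heap_used];
    m->size = size;
    heap_used += size;
    m->next = handles;
    handles = m;
    return 1;
}

/* Free a block: slide everything above it down, then patch every
   registered handle that pointed above the hole. Raw pointers held
   anywhere else (e.g. inside table entries) are NOT patched, which
   is why linked lists break under this scheme. */
void mmem_free(struct mmem *m)
{
    uint8_t *start = (uint8_t *)m->ptr;
    memmove(start, start + m->size,
            heap_used - (size_t)(start - heap) - m->size);
    heap_used -= m->size;

    struct mmem **pp = &handles;
    while (*pp) {
        if (*pp == m) { *pp = m->next; continue; }
        if ((uint8_t *)(*pp)->ptr > start)
            (*pp)->ptr = (uint8_t *)(*pp)->ptr - m->size;
        pp = &(*pp)->next;
    }
}
```

One workaround would be to store handle indices instead of raw pointers in the list links, at the cost of an extra indirection on every traversal; I haven't decided whether that's worth it yet.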

Anyways, I'll probably continue to work on the stack while I'm here. Now that I've gotten used to working on it all the time, it feels weird if I go more than a few days without touching it. In the meantime, I need to work on my tan... 

Just did a sizing of the stack again. The major portions of the MAC, NWK, and APS layers are about 70% finished, although testing and debugging is still ongoing. I rewrote a lot of the stack to make it cleaner and more maintainable. My past experience is that once code gets out in the field, it can quickly turn to spaghetti due to patches and quick fixes. So if the code is more straightforward, hopefully this can be minimized.

Now for the numbers: the code size is currently ~29 kB. That's not including the standard libraries, although I'm only using simple functions like memcpy and memcmp. I'm still using gcc for x86, so those numbers probably won't reflect the actual size; I'd figure the code will be quite a bit larger when I compile for a RISC architecture like the AVR. I'm pretty happy with that number, though. There's decent headroom, so I think I can make my 60 kB target size.

For the RAM size, I'm currently using 5.6 kB. That number is a bit high for me and reflects the fact that I've implemented a lot of tables and queues lately, i.e.: routing, discovery, indirect, aps_retry, mac_retry, etc... And of course, the biggest consumer of RAM is the buffer pool (10 buffers = 1.4 kB). The RAM size should pretty accurately reflect the actual usage on the target, since it's usually architecture independent. Once the stack is stabilized, I can start to optimize the RAM usage. Some ways to reduce it would be dynamically allocating memory for some of the linked lists instead of statically allocating it, shrinking the structures, shrinking the buffer pool, and tweaking the number of entries for each table.

Anyways, the numbers turned out better than I expected so it looks like there's hope to have a fairly tight stack. It's still gonna be a bit longer though since I need to implement some of the lesser used functions in each layer and I'm still cleaning things up. I'll try to keep everyone posted as a first release approaches. I hope it comes soon, cuz I'm getting tired...

I finally got the Zigbee broadcasts working. It's a slight deviation from my original plan to get the mesh routing working, but as I was going through the mesh code, I realized that it requires the use of broadcast transmissions. So I figured why not get broadcasts up first.

I ran into numerous problems with broadcasts. Even though I tested them in my original test fixture, it was only capable of testing two devices, which is almost a trivial case. When I tested it out in the simulator with a multi-node network, I ran into infinite broadcast loops that kept crashing my stack. So many data transmissions were flying around that they would immediately exhaust the buffer pools in all the nodes, and the nodes would end up hanging. Hmmm...need to do something about that too. I should be able to recover gracefully if I exhaust my buffers.
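For reference, the way the Zigbee NWK layer prevents these loops is with a broadcast transaction table (BTT): each node remembers the (source address, sequence number) of every broadcast it has relayed, and drops duplicates instead of rebroadcasting them. Here's a minimal sketch of that idea; the table size and the aging scheme are illustrative, not the spec's values.

```c
#include <stdint.h>
#include <stdbool.h>

#define BTT_ENTRIES 8

/* One broadcast transaction record: who sent it, which frame it was,
   and how long to remember it. ttl == 0 means the slot is free. */
typedef struct {
    uint16_t src_addr;
    uint8_t  seq;
    uint8_t  ttl;
} btt_entry_t;

static btt_entry_t btt[BTT_ENTRIES];

/* Returns true if this broadcast is new (process it and rebroadcast),
   false if it's a duplicate (drop it and break the loop). */
bool btt_check(uint16_t src_addr, uint8_t seq)
{
    int free_slot = -1;
    for (int i = 0; i < BTT_ENTRIES; i++) {
        if (btt[i].ttl == 0) { free_slot = i; continue; }
        if (btt[i].src_addr == src_addr && btt[i].seq == seq)
            return false;               /* seen it already */
    }
    if (free_slot >= 0) {
        btt[free_slot].src_addr = src_addr;
        btt[free_slot].seq = seq;
        btt[free_slot].ttl = 10;        /* arbitrary lifetime in ticks */
    }
    return true;
}

/* Called periodically to age out old entries so sequence numbers
   can eventually be reused by the same source. */
void btt_tick(void)
{
    for (int i = 0; i < BTT_ENTRIES; i++)
        if (btt[i].ttl) btt[i].ttl--;
}
```

The nice side effect is that the BTT also bounds buffer usage: a frame that would have looped forever now costs each node exactly one relay.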

Anyways, I almost had to rewrite the broadcast handling code to get it to work. I hadn't taken into consideration a couple of situations that I should have thought of. I also found a lot of repetitive code that didn't need to be there, so I cut all of that out. Finally, I got things to work and tested it on a five-node network with the topology I mentioned previously. Now I can go home and drink a beer. I've been working out of a hamburger shop on a Saturday night. Not the best way to spend a weekend. I'm sick of the smell of burgers and fries. If you're ever at a Mos Burger in Japan, you should try the rice burgers, though. They're pretty interesting.

This week has been pretty busy with the part-time job I'm doing for survival money. The part-time job is nothing too difficult. I'm basically just doing customer support and technical sales for the company's product line, which consists mostly of LAN, WAN, and some encryption chips. I'm contracted for two days a week, which is just enough to pay rent and bills. That means I can probably survive about two years working on this project at my current level of spending. Another way to look at it is that I'll probably die of mental fatigue before my savings gets fully depleted. Ha ha ha.

In between the customer visits this week, I managed to squeeze in some time to continue the test efforts. I finally was able to start testing the data transfers. I got the basic network management functions working over the weekend in my simulator so that I could form a network and also add nodes to it.

Today, I got single-hop unicast transfers working. That's a bit of a milestone for me, because to transfer data, I need to send it out from the application layer. From there, it travels down through the application sub-layer, network, and MAC layers, and finally out the driver to the sim. From the sim, it travels to all the nodes, which check the destination address and drop the frame if it's not meant for them. At the target node, it travels all the way up the stack (driver, MAC, NWK, APS) until it reaches the application layer, where it gets printed out to the simulation console. All the way up and down the stack, there are many things that could have gone wrong, but it was surprisingly easy to get working. I guess that's because I did a lot of testing of the data transfers in my old test fixture.

Man, I just got off a monster coding session today. The amount of time spent coding totaled about 10 hours. That's 10 hours of pure coding: no talking to random strangers, no useless meetings, no phone calls (nobody calls me anyways), no internet surfing. My head feels like it's about to explode.

I felt it was necessary so that I could move forward again. Things have been going kind of slow lately as I've been concentrating on getting the simulator up and running. I had thought I was up and running originally, but ran into an issue where multiple processes printing to the same console made it impossible to debug. So I've been busting my ass these past three days modifying the simulator to open each node in its own window, and also making the communications between the nodes and the simulator multi-threaded.

Well, today, I finally got it up and running.

The simulator application has turned into a nice little tool. It's now a separate application from the protocol stack, and it only consists of three files. I made a lot of changes to the innards of it which simplified the code and allowed me to remove a lot of things. It's compiled separately from the node software so that you can build your node stack almost the same as you would build it for hardware. The only difference is that the simulator expects two bidirectional pipes for communications to the node, one for the radio ether and one for sending/receiving commands to the node. These are just created in the main function so it's not too bad.

I like the fact that the sim is separated out from the stack code now, because it can be used for other protocol stacks as well. It just calls an executable and opens it in its own window, so it doesn't care if the executable is Zigbee, 6LoWPAN, or some Joe Blow proprietary stack. The communications use "named pipes" instead of unnamed ones. The difference is that named pipes look like files from the program's point of view, which is why two processes running different code can access and communicate through them. As long as your programs know the names of the files (pipes), they can open them and send data through. I use a standard naming system based on the process' PID so that both sides always know the correct pipes to communicate with.

Anyhoo, things are working again and I am already simulating multi-node networks. I'm finding a lot of issues with my network join logic, especially dealing with more than two nodes. It makes me think that the time I've spent on the sim might actually be worth it. It might even be useful for other people who want a simple way to simulate a multi-node network.

That's it for me. I'm pooped!

A bizarre thing happened to me this morning. I was going to the international ATM near where I lived to get money to pay my rent. My bank is an American one so I can't use the standard Japanese ATMs and that was the only machine within cycling distance from my apartment. The international ATM is located in the basement of a department store, and the department store spans about 11 floors.

As I was walking towards the ATM, I noticed a lot of people crowded in the area in front of it. As I approached closer, I was shocked to see somebody lying face down on the tile floor. Somebody had committed suicide by jumping from one of the upper floors, and I had to pass by their dead body to get to the ATM. What would drive someone to such despair that they would waste their life like that, and why did they have to do it in front of the only international ATM in the area? It was such a horrible experience that it put me in a bad mood the whole day. Japan has one of the highest suicide rates in the world, but it's one thing to read about it in the paper and another to experience it right in front of you.

It was not the best way to start the day.

Anyways, I've been putting a lot of hours into the stack these past couple of days. The simulator is up and wheezing along so I've been able to run some basic tests with two nodes.

Damn. I just checked my watch, and today is Friday. In my last post, I mentioned that it was Friday night, so I would crack open a bottle of wine and get drunk watching anime. Unfortunately, it was Thursday night. After leaving the job, the days just kind of blend into each other. My routine is basically wake up, miscellaneous, work on the project, miscellaneous, and go to sleep. I've even lost my concept of weekends.

Don't get me wrong. I'm not a workaholic. I'm actually pretty lazy, but I'm just struggling to move the project forward right now. Apparently so much so that I've lost my concept of time. Now if I can work hard enough to go backwards in time ten years, it'd be great.

On the upside, life outside of the corporate world is excellent. Aside from the lack of insurance and worrying about how long my savings can hold out, I must say that I've never been happier. I'm writing this entry on the veranda of a cafe overlooking a park. It's noon, the weather's beautiful (a rarity in Tokyo), and there's a lunchtime orchestra concert going on over there, injecting a little culture into this ghetto engineer's life. Also, I can see a bunch of suits watching the orchestra but getting ready to leave, since their lunch break is over. Ha ha ha. Oh, the price of an eternal lunch break.

And the best part is, my crappy little hacked simulator is showing some signs of life. Ah, the little things that keep engineers and programmers happy...

Have a nice weekend (this time it's for real)! 

Apologies for the radio silence recently. I've been obsessing over how to simulate the stack lately, and it's consuming more and more of my time. I've decided to put Cooja on hold for now. I think it would be more useful once my stack is more stable, so that I can do some higher-level testing like energy estimation. At this point, however, I just need something that can set up a network and see if the nodes can communicate with each other successfully.

I also tried Contiki's Netsim, but I had problems getting it to work. I suspect it was the way I have WPCAP (Windows Packet Capture) installed or configured. Netsim uses uIP as well as Rime to simulate the network, which I believe is why WPCAP is needed.

Anyways, I think for my needs, this is also a bit more complicated than I want. I will probably be doing a lot of debugging so I figure that my simple brain would get confused if I have to wade through too many layers to figure out problems.

As each day passes, I'm getting more and more irritated with myself for not being able to move the project forward. It's like hitting a plateau and losing all of your momentum. It's been about three weeks that I've been trying to get the code simulated in a network scenario and now I'm getting frustrated at my inability to do it. So to try and force the issue, I decided that I would write a crappy simulator with a command line interface. I spent the last two days sketching it out and coding up a simple prototype. There's no beautiful GUI, no eye candy…it just supports a few commands, but it looks like it will be able to do what I need.

I only need a few things right now. It needs to be able to spawn nodes as independent processes, allow nodes to transmit and receive into a shared radio medium, and have a command line that allows me to send commands to individual nodes to force them to do something (like start a network or send some data).

I'm currently using Linux "unnamed pipes" for the communication between each node process. Each node will have tx, rx, and cmd pipes connected to a main process. The main process will handle the command line shell and parsing, spawn nodes, make sure that data transmitted by a node goes out to all the other nodes, and send out commands. The initial prototype looks good, so I'm now starting to integrate the project into it.
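Here's a minimal sketch of that setup: the main process calls pipe() for each channel, fork()s the node, and each side closes the ends it doesn't own. The toy node and its "form"/"BEACON" exchange are purely illustrative, not the actual command protocol.

```c
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

/* The simulator-side ends of a node's three channels. */
typedef struct {
    int tx;   /* read end: frames the node transmits into the "ether" */
    int rx;   /* write end: frames the simulator delivers to the node */
    int cmd;  /* write end: shell commands sent to the node */
} node_pipes_t;

/* Toy node: read one command, "transmit" a reply frame, exit. */
static void node_main(int cmd_rd, int tx_wr)
{
    char buf[32];
    ssize_t n = read(cmd_rd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        if (strcmp(buf, "form") == 0)
            write(tx_wr, "BEACON", 6);  /* pretend we started a network */
    }
    _exit(0);
}

/* Create the three unnamed pipes, fork the node process, and keep
   the simulator-side descriptors. Returns the node's PID. */
pid_t spawn_node(node_pipes_t *np)
{
    int tx[2], rx[2], cmd[2];
    if (pipe(tx) || pipe(rx) || pipe(cmd)) return -1;
    pid_t pid = fork();
    if (pid == 0) {                     /* child: the node process */
        close(tx[0]); close(rx[1]); close(cmd[1]);
        node_main(cmd[0], tx[1]);       /* never returns */
    }
    /* parent: the simulator keeps the opposite ends */
    close(tx[1]); close(rx[0]); close(cmd[0]);
    np->tx = tx[0]; np->rx = rx[1]; np->cmd = cmd[1];
    return pid;
}
```

Broadcasting the "ether" then just means the main process reads each node's tx pipe and writes the frame into every other node's rx pipe.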

I'm hoping that one of these days, I'll be able to get a simulator working with the stack so that I can go back and try to finish it.

Hmmm…I actually feel better now that I've aired out some of my frustrations. Anyways, it's Friday night. I'm gonna pop open a bottle of wine and get drunk watching anime. Hope y'all have a good weekend!

So I've been playing around with Cooja lately and got the stack to compile on it. I also got it to trigger my processes, which basically means that the stack is functional. But in getting to know Cooja better, I'm also seeing some of the drawbacks of using it...