I am still waiting on the Mega modules I ordered last week; not sure what the postage hold-up is. In the meantime, I managed to get the Galileo Gen1 to work!! So many bugs and wrong turns on this device.
●The firmware update instructions are very unclear: the device must have NO SD card inserted and must then be booted with the USB port plugged in. It then comes up as a serial port so the firmware update can work.
●To my surprise, I ran the latest IoT image, the same one used on the Gen2 board. It worked…with bugs.
●SPI functionality via MRAA is non-existent on the Gen1, even with the latest 7.3 mraa module. The clock signal on my scope was garbage! Very disappointing.
●SPI via Arduino sketches is also flaky: at a setting of 8 MHz I was only able to get around a 2-3 MHz clock before it skewed into noise.
On a whim I ended up trying the pi-spi nodejs module, and it worked at 3 MHz!!! But when I rebooted I got noise on the SPI pins. Hours later I realised that if I ran an Arduino sketch that uses SPI beforehand, SPI was usable again. So I wrote an Arduino sketch that initialises SPI and does nothing else. At boot time the sketch starts, about 10 seconds later the XDK daemon launches the main.js script…and bam!!!
Today I completed the initial set-up of my RPi A+. Wow, that took a while. I needed a specific set of tools ready on the device:
Network connected via WiFi
Basic Apache install to host the Client interface page
Basic FTP to simplify uploading code and files
Node JS to run the Wow Suit software
Node-SPI module to enable NodeJS to use the SPI port on the device
Installing and configuring Node and the SPI modules was the MOST difficult part. I was initially planning on using the MRAA module, but it has so many prerequisites and build failures that I bailed on it. I may go back to it if the current Node module I’m using does not suffice.
This post was by far the only advice that worked on the A+ board. I initially attempted to compile the latest version of NodeJS directly, but the board kept crashing midway. So I’m copying the instructions here for anyone else, and for future reference for myself:
sudo apt-get update
sudo apt-get upgrade
Download the latest node build
Install the package using dpkg
sudo dpkg -i node_latest_armhf.deb
Test Node and NPM
At this point Node works perfectly as well as npm. Now on to the next issue. SPI!
This is the “pi-spi” nodejs module. It is the ONLY one that would compile and run. I have not yet tested whether it works as I’m at the office, but it’s good to know that it compiles and runs!
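For my own notes, this is roughly how I expect to drive it. A minimal smoke-test sketch: the device path and clock speed are my assumptions, and the require is guarded so the same file still loads on a machine without the module.

```javascript
// Minimal pi-spi smoke test (a sketch, not the final suit driver).
let SPI = null;
try {
  SPI = require('pi-spi'); // only installed on the Pi itself
} catch (e) {
  // module missing: skip the hardware write, keep the rest runnable
}

// A recognisable test pattern to look for on the scope.
const pattern = Buffer.from([0xaa, 0x55, 0xaa, 0x55]);

if (SPI) {
  const spi = SPI.initialize('/dev/spidev0.0'); // assumed device node
  spi.clockSpeed(3e6);                          // 3 MHz worked on the Galileo
  spi.write(pattern, (err) => {
    if (err) console.error('SPI write failed:', err);
    else console.log('wrote', pattern.length, 'bytes');
  });
}
```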
Rendering a line in a 2D bitmap…who would have thought that was hard? Here is some code I hacked up from the Rosetta Code archive: lineTest
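The core of it is the classic Bresenham line algorithm. A self-contained sketch of the version I adapted; the grid shape and the direct `grid[y][x]` write are my own stand-ins for the suit's pixel buffer.

```javascript
// Bresenham line: walks from (x0,y0) to (x1,y1) choosing the grid cell
// closest to the ideal line at each step.
function drawLine(grid, x0, y0, x1, y1, value) {
  const dx = Math.abs(x1 - x0), dy = Math.abs(y1 - y0);
  const sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1;
  let err = dx - dy;
  for (;;) {
    grid[y0][x0] = value;              // plot the current point
    if (x0 === x1 && y0 === y1) break; // reached the end point
    const e2 = 2 * err;
    if (e2 > -dy) { err -= dy; x0 += sx; }
    if (e2 < dx)  { err += dx; y0 += sy; }
  }
}

// Example: a diagonal across a 5x5 grid.
const grid = Array.from({ length: 5 }, () => new Array(5).fill(0));
drawLine(grid, 0, 0, 4, 4, 1);
```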
Spreadsheets are super handy for this kind of work. It’s a kind of meta pixel mapping and it saves me a TON of time. I have so far created a full class for the Wow Suit’s LED matrix. It includes 3 major blocks of data:
pixelGrid: An array containing the pixel map. This is a 40 x 64 item array of 3-byte blocks (7,680 bytes). All rendering and processing is done directly to this block of data; it’s the bitmap representation of the suit’s LEDs.
LEDs: An array containing n (the number of LEDs in the suit) blocks of 4 bytes, plus 1 start frame of 4 bytes set to 0 and 1 end frame of 4 bytes set to 255. Each LED block has a start byte, which contains brightness data for that LED, followed by 3 bytes representing Red, Green & Blue. This is the data block that is sent out to the suit via SPI (7,688 bytes).
rowMap: An array containing a map of the physical address of each LED relative to its position on the suit. This is a 40 x 64 item array of short integers (5,120 bytes). It is used to find the physical ID of an LED in the real LED data block, LEDs, that is sent out via SPI to the suit.
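The three blocks above can be sketched like this in Node. The LED count of 1920 is my own inference from the quoted 7,688-byte total (4 start bytes + n × 4 + 4 end bytes); everything else follows the sizes listed above.

```javascript
// The three data blocks of the LED matrix class, as plain buffers.
const COLS = 40, ROWS = 64;
const NUM_LEDS = 1920; // inferred: (7688 - 8) / 4

// pixelGrid: 3 bytes (R, G, B) per grid cell -> 7,680 bytes.
const pixelGrid = Buffer.alloc(COLS * ROWS * 3);

// LEDs: 4-byte start frame (zeros), 4 bytes per LED
// (brightness, R, G, B), then a 4-byte end frame of 255s.
// This buffer goes straight out over SPI.
const LEDs = Buffer.alloc(4 + NUM_LEDS * 4 + 4);
LEDs.fill(255, LEDs.length - 4); // end frame

// rowMap: one 16-bit physical LED id per grid cell -> 5,120 bytes.
const rowMap = new Uint16Array(COLS * ROWS);
```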
I have created a full set of functions to set up the data structures, render horizontal and vertical lines of any given length, and a smooth dimmer function for frame blending.
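The dimmer boils down to a per-byte linear interpolation between two frames. A hedged sketch (the function name and 0..1 blend factor are my own choices, not the exact class method):

```javascript
// Blend every byte of `current` toward `target` by factor t (0..1).
// t = 0 returns the current frame, t = 1 the target frame.
function blendFrames(current, target, t) {
  const out = Buffer.alloc(current.length);
  for (let i = 0; i < current.length; i++) {
    out[i] = Math.round(current[i] + (target[i] - current[i]) * t);
  }
  return out;
}

// Example: halfway between black and an orange pixel.
const from = Buffer.from([0, 0, 0]);
const to = Buffer.from([255, 100, 0]);
const half = blendFrames(from, to, 0.5);
```

Calling this once per tick with a small t gives the smooth fade between frames.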
On another note, I have received the MIC IN audio modules I was waiting on from eBay…
I really hope to utilize this and make the suit and masks audio responsive.
I used the MRAA library to get a pretty good rate out of the Gen2 Galileo. Above is 8 bits being latched via the clock pin. I made a VERY hacky bit-bang function that pushes data out to the WS2803. And success.
Below is a video of a fully working shiftOut() function written in JS for the Galileo GEN2, driving the suit
Ta-da! I’m running a simple test program I developed to test a single WS2803 chip, and it’s working as intended.
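The shiftOut() itself is simple: bit-bang one byte MSB-first, toggling the clock around each data bit. In this sketch `writeData`/`writeClock` stand in for the mraa `Gpio.write()` calls on the real board, which also lets me check the bit order off-device:

```javascript
// Bit-bang one byte MSB-first; the WS2803 clocks a bit in on each
// rising edge of the clock line.
function shiftOut(byte, writeData, writeClock) {
  for (let i = 7; i >= 0; i--) {
    writeData((byte >> i) & 1); // present the data bit
    writeClock(1);              // rising edge latches it
    writeClock(0);
  }
}

// Off-board check: capture the bit order instead of driving pins.
const bits = [];
shiftOut(0xa5, (b) => bits.push(b), () => {});
```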
Now on to integrating the HTTP server via NodeJS. Interactivity, here we come!
I accidentally purchased a GEN 2 board 2 weeks back. It was advertised on eBay as an unused but opened-package GEN 1 board. So I bought it as a backup for my current project, and to test some more razor-edge stuff so I wouldn’t blow my live one. This came to me as both a good and a bad surprise. I actually wanted a GEN 2 board, because Intel improved all the things I hated about the GEN 1 board, but I also needed a quick swap-in backup for my project…and boy, did Intel do a number on us all!
Things I really like:
Networking “just works”; even the WiFi module was as simple as plug in and boot!
The boot-up system was unchanged, and it booted just as fast as the GEN 1
Same form factor and size as the GEN1
Packaging came with the power supply….
NodeJS works natively…with some tinkering
There is a pretty good IDE called the Intel XDK IOT edition which lets you create NodeJS projects, upload, debug etc…
Things I REALLY hated:
Power…the main power supply has changed from 5 V to 7.5-12 V! WHY??? This makes portable devices so much more annoying to power.
The GEN1 serial port, which was strange on its own, has been replaced by an even stranger 6-pin FTDI connector. You need to buy your own $15 serial-to-USB module, and a special one at that, as no other standard ones work: you need the TTL-232R-3V3. This is super annoying. They took one chip off the board and made the user pay for it externally.
I’m yet to test how the pin speeds work in Node JS. If I can create a working WS2803 driver in Node, I will be super happy!
I had some luck getting a simple web server to accept POST data and act on it by outputting a PWM signal to the pins. This was quick, which I’m happy with! It is one of the main reasons I would like to ditch the Arduino code: its HTTP server instance is too slow to be usable in a real-time input system.
The basic orientation input system for the CheapVR system is pretty much ready. I’m using a combination of UDP and WebSockets. Originally, I had the local server running a simple UDP server that received the orientation data from the phone and then re-transmitted it via HTTP POST queries from the browser. However, this method had a lot of latency.
Currently the browser connects via a WebSocket to the server and continually polls it for orientation data.
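The relay between the two sides reduces to a latest-sample holder: the UDP handler overwrites one stored reading, and each WebSocket poll just reads it, so a slow consumer never queues up stale packets. A sketch (class and field names are my own):

```javascript
// Holds only the most recent orientation sample from the phone.
class OrientationRelay {
  constructor() {
    this.latest = null;
  }
  // Called from the UDP 'message' handler with the phone's sample.
  update(yaw, pitch, roll) {
    this.latest = { yaw, pitch, roll, at: Date.now() };
  }
  // Called for each WebSocket poll from the browser.
  read() {
    return this.latest;
  }
}

const relay = new OrientationRelay();
relay.update(10, -5, 0.5);
relay.update(12, -4, 0.4); // a newer packet simply overwrites the old one
```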
I have started working on a small VR web-app. I think 3D headsets will only get better and better as the tech progresses and people start to get into it. I own the 1st-gen Sony HMD unit, the HMZ-T1.
This thing is great, but it lacks any kind of tracking, unlike the Oculus Rift and the new Google Cardboard (which really got me thinking about this whole thing again). I have in the past used old Android phones to get tracking data. There is a great little app called Sensor UDP; it lets you send UDP messages with the sensor data from any Android phone to a PC/tablet/phone etc. This has been done before, but I’m working on a different angle!
A phone will be stuck to the HMD unit and will track your head movements. It will then send this data via WiFi to the PC connected to the HMD via HDMI. I can then make really cool 3D web apps in Chrome or any HTML5-enabled browser.
I’m testing out some stuff and have created this nifty little app. When you touch a blob, it grows (and at the same time makes a charge-up sound). The finger you touch with has its PWM value increased up to maximum until the finger has been touching the blob for a set timeout. Then the blob is deleted, a pew sound is played, and 2 new blobs are created at random spots close to where the original was. If you hold all 5 fingers over a set of blobs, or one large blob, they are deleted and a pew..pew…..pew……pew echo is played…
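The "pop" step of that loop can be sketched as a pure function: after the hold timeout fires, remove the touched blob and spawn 2 new ones at random spots near the original. The spread distance and the halved radius are my own placeholders, not the app's actual tuning.

```javascript
// Remove blobs[index] and push 2 replacements at random offsets
// within `spread` pixels of the original's position.
function popBlob(blobs, index, spread = 40) {
  const old = blobs[index];
  blobs.splice(index, 1); // delete the touched blob
  for (let i = 0; i < 2; i++) {
    blobs.push({
      x: old.x + (Math.random() * 2 - 1) * spread,
      y: old.y + (Math.random() * 2 - 1) * spread,
      r: old.r / 2, // assumed: children start at half the size
    });
  }
  return blobs;
}

const blobs = popBlob([{ x: 100, y: 100, r: 30 }], 0);
```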
I have abandoned rendering any HTML5 Canvas graphics in order to test performance and latency. I have plugged all my methods directly into CSS spans, and using their styles has pretty much eradicated all the latency issues I was having…on to better places!