Ledumont

28/07/2017 Gesture in time – Sensor Lab Internship 2, posted in: Electronics

The main goal of this internship was to learn C++ and use the openFrameworks library. The way I see it, the internship ended up being split into three sections. For the first part, I followed an online class and worked through basic exercises, with guidance from the lab coordinators, ranging from data structures to basic collection types like a bag, a queue and a stack. Once this part was completed, I was pointed towards more complex problems to solve; these included classic computer science assignments like writing my own linked list and doubly linked list classes. After those were done, I moved on to storing data in a hash table using my own hashing function. This let me experiment with concepts available in the C++ language such as pointers and dynamic memory, which were both new to me. The last part consisted of building a project of my own, and I was able to stay pretty close to what I had initially proposed. All of this while giving support in the lab, whether it was building some of the demo circuits or helping students here and there. This was a nice add-on, as I got to revisit some core concepts and expand on previously learned circuits and skills.

 

In order to start, I decided to follow an online class on SoloLearn in combination with the tutorials and references from the cplusplus website. Doing both was good because I got to test what I had learned in different ways, and it made me revise concepts more than once. Every time I finished a section on SoloLearn, I took a quiz that helped me target what I needed to revisit or focus on. By using both of those resources, I was able to grasp most of the new concepts and identify where I was going to need more help; whenever I had specific questions or concerns, I was able to get the key information from the lab supervisors. Once I had a good enough base, I was guided towards using structs and playing with different data types. By doing this I was directly applying the concepts I had just assimilated and learning how to store and parse data in different ways. This came in really handy for my project, since I had to store and analyze varying amounts of data. Next, I had to write my own linked list and doubly linked list classes. Linked lists are an interesting way to store data, since their size is not fixed like an array's. An array has indexing and a defined length, which means that when you create an array you reserve a block of memory where all its items will be stored; the obvious advantage is that indexing lets you query any item directly. Linked lists, on the other hand, make use of the pointers available in C++. Linked list items are stored at different locations in memory, and each node points to the item after it and, in the case of a doubly linked list, to the item before it as well. This allows a list to be as long as you need it to be, and of undefined length when initialized; in exchange, to access an element you need to start from the head and walk your way down.

 

Since I now had my own linked list classes, I was guided towards hash tables and hashing functions. A hash table uses the best of both the array and the linked list: in short, it consists of an array of linked lists. This lets you store an undetermined number of items while still reaching them through constant-time array indexing, which makes retrieval fast. To determine the location where an item is added or retrieved, you use a hashing function. This function always returns the same value for a given element, and that value is used as an index into the array. If there is already an item at that location, the new one is added to the linked list at that index. This way of storing data was new to me but highly interesting, as it is a real blend of both methods and a good use of pointers.

 

At this point in the semester, it was time to try and make a project using C++ and the openFrameworks library. The main guiding line behind my project was to represent gestures in time. We constantly use touch-enabled devices, from light switches to cell phones, where the gesture we make ends at the moment of release. I was interested in using the gesture as the starting point, or set of rules, for the life of a particle system; a particle system that would then animate the lighting display of a chandelier-like structure. I decided to establish a few basic movements that I wanted to capture and build a vocabulary with:

Implosion

Explosion

Rotation

Alignment

Acceleration

Deceleration

 

Once those words were defined, I wrote pseudocode describing how I would treat the data, and explained my intentions to the supervisors to check that my approach was logical. Code-wise, I was able to recognize all the predefined gestures, which was a significant milestone for me. From there, I just had to interpret them as light animation. This was done using a particle system to which I applied different forces based on what was found in the gesture. The particles are animated following the rules given by the gesture, and they are then mapped to an LED matrix, which is dressed up as a chandelier for this project.

The end result was a small table I built, with embedded lighting and a camera, enabling me to do some computer vision analysis. I had initially wanted to use a multi-touch screen, but I decided to make my own touch surface using computer vision for a few reasons. By making my own table, I was able to define the form factor. I wanted a certain control over this, because it gave me a better idea of what movement or gesture to anticipate, while also limiting the possible number of fingers on the surface. Due to its smaller size and edge, it is harder for the user to put ten fingers on it at once or to collaborate with another user; although that could be interesting for another project, I preferred a one-on-one interaction. This change of surface also addressed another problem I had encountered: it removed any need for visual content on the screen. This was initially good news for me, as I wouldn't have to design and display anything other than the actual output, my hypothesis being that it would be hard to display something that would guide the user while staying within the aesthetic of the whole project. Looking back, this was a mistake, as interacting with this lighting device without the visual output I had on the computer was very hard. This could potentially be addressed by having more definition in the LED matrix, or by having the particles reinitialize or be more responsive on every new gesture. That being said, there was another reason I wanted to build this gesture recognition library on blob tracking from computer vision: I could apply it in other contexts. I can now use my library, or expand on it, in contexts other than hand gestures, whether it be motion in a crowd or a dance performance. Making something more versatile was, I think, the main argument for me.
The OpenCV add-on for openFrameworks worked well for blob detection using a background subtraction technique; a blob can be defined as a given zone of interest in an image. In this library, sorting and tracking the blobs is done by assigning each an identification based on its size. This can be fine for some applications but was not optimal for me: in a gesture, the contact size of the fingers varies, and adding or removing a finger would change all the IDs. This is why I decided to build a layer on top of the blob detection that compares positions and IDs the blobs based on their previous positions. This turned out to be probably the most useful piece of code in the project; it enabled me to build all of the rest of the gesture analysis library.

 

For the visual output of the project, I decided to use Adafruit's NeoPixel LED strips and a FadeCandy to address them. The FadeCandy was great, as I was able to address my light pixels directly through openFrameworks: I could render my visual, transform it into an effect, and the FadeCandy addressed the pixels. This meant I didn't have to use an LED driver or write my own Arduino code to control the LEDs, which I would most likely have addressed via serial communication. The FadeCandy handles all of this using the Open Pixel Control protocol. In the end, this was a great decision, as it worked well and saved me a lot of time. The library had some minor indexing errors, but I was able to fix them on my own.

As for the aesthetic of said chandelier, I decided to use plexiglass to carry and diffuse the light. This saved me from about 215 solder joints and the possibility of bad connections, and it really echoed the original aesthetic of a chandelier. The overall effect was great, but I would need to spend a lot more time on the design and fabrication to achieve the look I have in mind. I consider this a good start, or a prototype to build from.

 

In future explorations I would like to make the design of the chandelier a little cleaner and use variously sized plexiglass diffusers to give it a more interesting shape and look. I would also like to move away from the chandelier and present it in a very large format where people could walk around the lights and be immersed in the movements, maybe even acting a bit like strobe lights playing with the vision and perspective of passersby. You could then have a central table from which to control the lighting motion, with people walking around it, ideally keeping the wiring exposed and using it as part of the design, every cable leaving from the central brain, the table. Code-wise, I really want to make my blob tracking class more robust, and maybe contribute it back to the openFrameworks community through the openCV add-on. Now that the semester is done, I can go over the whole thing to optimize and clean up the code. I would probably need some kind of feedback or guidance for the gesture, which is another thing I want to explore more deeply; I still don't want to display something on the screen, as I believe it breaks the aesthetic of the installation. I will probably start by playing with the rules of the particle system and their format. In other words, there are still plenty of places to expand from with this research.

 

Overall, this internship has been an amazing experience for me. Looking back, I can definitely see all the progress that was made: starting from a basic online class, from which I received a PDF certificate, moving on to more complex data parsing assignments, and finally building my own project from scratch; a project for which I built a hand gesture library, a blob tracking class and a particle system, and was able to connect all of this to a physical output. Being in the Sensor Lab and Computation Lab most of the time I worked on this was ideal, as I was able to get guidance and feedback from both lab coordinators. I wouldn't have been able to get this far without that collaboration and help. Also, being in the lab, I was able to build the demo circuits and sometimes help students, which was highly beneficial for me: I could go back over things I had learned previously and expand on other skills while trying to help them with their projects.


Thierry Dumont, 14/12/2016

Internship Report

 

 


Useful links

https://www.sololearn.com/Course/CPlusPlus/

http://www.cplusplus.com/

https://github.com/scanlime/fadecandy

https://www.adafruit.com/product/1689

http://blogs.wcode.org/2014/10/fadecandy-neopixels-and-ofxopc/

https://github.com/DHaylock/ofxOPC

http://openframeworks.cc/documentation/ofxOpenCv/

http://openframeworks.cc/learning/#ofBook

 
