camera-free mobile motion capture as input, smartphone as processor, see-through dataglasses as output


a proposal for a next generation mobile computing environment

is anyone working on using motes/TinyOS for a non-optical human body motion capture system ? replacing keyboard, mouse and touchscreen with position signals, updated many times per second, from hundreds of senders attached all over the human body

the idea: to use many hundreds of motes or active rfid tags … worked into textile stripes and textiles … ribbons … shirts … trousers … gloves … rings … bindi-style points … which update their position signals many times every second …

the user would see his/her whole body being motion captured this way, camera-free, on lightweight plastic semi-transparent dataglasses …

the processor unit(s) would be one or several 1 GHz plus smartphones

perhaps this would not even need active rfid tags … i was just reading the specs of some passive rfid tags:

http://www.anewtech.com.my/datasheet/industrial-rfid/Tags-HF-UHF.pdf

UHF Tags Characteristics

EPC-C1G2 ICs: NXP UCODE G2XM
Product codes: UHF-g2-250 / UHF-g2-250HT
Operating frequency: 860 – 960 MHz
Memory size and type: 512 bits – EEPROM
Data retention: 10 years
Write endurance: 10 000 cycles
RF data transfer rate: 40 – 640 kbit/s
Temperature range: -55 °C to +125 °C / -55 °C to +240 °C
Degree of protection: IP68 (see note)
Materials: plastic ABS for standard models, PPS resin for HT models
Weight: 17.8 g / 25.7 g
Mechanical dimensions: 51 x 51 x 6.5 mm (± 0.5 mm)

but actually i think active rfid tags or motes would be better … because of the decentralized sending activities
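a quick back-of-envelope check of whether one shared channel could carry this at all … the numbers here (tag count, update rate, bits per update) are my own assumptions; only the 640 kbit/s ceiling comes from the datasheet quoted above:

```python
# rough feasibility check for hundreds of body-worn tags sharing one channel;
# TAGS, UPDATES_PER_SEC and BITS_PER_UPDATE are assumed values, the channel
# rate is the upper end of the 40-640 kbit/s figure quoted above.
TAGS = 300              # assumed number of tags worked into the textiles
UPDATES_PER_SEC = 30    # assumed target update rate per tag
BITS_PER_UPDATE = 96    # assumed: tag id + 3 coordinates + timestamp

required_bps = TAGS * UPDATES_PER_SEC * BITS_PER_UPDATE
channel_bps = 640_000   # upper end of the quoted UHF data rate

print(f"required: {required_bps / 1000:.0f} kbit/s, "
      f"channel ceiling: {channel_bps / 1000:.0f} kbit/s, "
      f"utilisation: {required_bps / channel_bps:.0%}")
```

with these assumed numbers the raw demand already exceeds the channel (864 vs 640 kbit/s, before any collision or protocol overhead), which supports the hunch above that decentralized active senders would be better than one passive-tag reader.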

has anybody thought about something like this ?

here is a writeup i already sent out to some people, so far with only limited feedback … am a bit puzzled why … i think the idea is marvellous

it is about a next generation truly mobile computing environment … like kinect for smartphones, but not camera based, instead based on several stripes of lightweight material sending several position status updates per second … but let me start from the beginning … the central part of the system would be a state of the art smartphone … 1 GHz plus processor … using a yet to be developed kind of dataglasses as output. ideally these dataglasses would be plastic and transparent, so the user can decide what percentage of real world sight gets mixed with what percentage of data display … the two little displays would be a kind of next generation electronic ink capable of showing color video material.
http://www.pixelqi.com/ might be a possible source for such tiny displays.
now i come to the third most important part of the system, the alternative input being generated through a series of lightweight plastic textile stripes with rfid/bluetooth/infrared/gps/? signal senders within … these stripes are worn around the fingers, elbows, knees … one could also imagine a shirt and trousers having hundreds of senders integrated, plus gloves … just recently, a major effort in this direction has been accomplished by jeff rowberg with his keyglove wearable input device.

to go deeper into this third part of the system … the stripes/textiles/clothes sending many position updates per second … the reason for this is to get a truly mobile motion capture without a camera … which means a great freedom of movement … together with the semi-transparent plastic dataglasses as output … this system would allow someone to dance on a lawn and play a software piano with hands and legs moving freely … lie on the grass and watch 50 percent blue sky and 50 percent a self-created video which one edits with left/right/up/down, circular, swiping, pinching etc. movements in the air … a modern day poet could stand near a tree and rest the hands on its bark and, when inspiration strikes, write the poem directly on its bark … or one could sit in a rowing boat on a lake and paint the water scenery either on the boat's wooden seating or in the air … limitless possibilities … but most of all … and that is what motivates me the most to write to many persons about this idea … the children will not be crippling themselves with hours of game console controller holding and neck-bent, spine-deforming, unhealthy sitting positions … but they could go outside and play in upright sitting, standing and moving situations … also they are our future programmers and game designers and robot engineers and will be witnessing the birth of the first sentient artificial beings on this planet … how can humanity inspire a liberated and evolved mindset being born artificially
( ai robot/android ) if our blood and bones bodies are being kept in unhealthy positions holding smartphones, being constantly on keyboards and game controllers …
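to make the "position status updates" idea a bit more concrete, here is a minimal sketch of what one sender's update packet could look like … every field name and width here is my own assumption, not any existing protocol:

```python
# a minimal sketch of a per-node position update packet for body-worn
# senders; the field layout is an assumption for illustration only.
import struct

# packed as: node id (uint16), timestamp in ms (uint32),
# x/y/z position in millimetres relative to a body-frame origin (int16 each)
PACKET_FMT = "<HIhhh"   # little-endian, 12 bytes per update

def encode_update(node_id, t_ms, x_mm, y_mm, z_mm):
    return struct.pack(PACKET_FMT, node_id, t_ms, x_mm, y_mm, z_mm)

def decode_update(payload):
    return struct.unpack(PACKET_FMT, payload)

pkt = encode_update(node_id=17, t_ms=123456, x_mm=120, y_mm=-340, z_mm=905)
print(len(pkt), decode_update(pkt))  # 12 bytes, round-trips to the same tuple
```

at 12 bytes per update, even 300 nodes at 30 updates per second would produce roughly 108 kB/s for the smartphone to ingest, which a 1 GHz class device could handle comfortably.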

i did write about the idea at

http://mayloveheal.wordpress.com/2010/03/27/post-for-pixel-qi-blog/

http://mayloveheal.wordpress.com/2010/07/02/letter-to-playmadeenergy/

http://www.technologyreview.com/blog/mimssbits/27063/#comment-234913

http://www.technologyreview.com/business/38173/page1/#comment-234915

and with a slightly related connection:

http://mayloveheal.wordpress.com/2009/01/14/my-wish-of-a-virtual-reality-interactive-game-creation-and-playing-dome-with-energy-harvesting-features-and-also-communicationexchange-evolution-goals/

22 Responses

  1. this one here seems to be a next step in the logic of my concept at

    http://www.facebook.com/note.php?note_id=297907130233916

    in the comments i started to fragmentarily attach the stuff i found while searching for this many-times-a-second position updating tag …

    //////

    RFID-radar – A new identification technology – Introduction

    http://www.rfid-radar.com/introduc.html

    Before the arrival of RFID-radar, readers have been able to read the identity of multiple transponders in a zone at one time, but they have not been able to locate where in the zone the transponders actually are physically.

    //////

    and from there on please read the comments as they are and see if you think there is a red thread to it … i feel that there is something, but my rosey-dosey … hippie kind of mind … dreaming up utopias and being in philosophical realms … makes me miss the logic of it …

    i feel somehow that a miniaturization of the RFID radar and naturally the shorter distances on the body from tag to radar echo sender unit could speed up the reaction times … plus then the other important part would be

    ///////////

    Inertial navigation system – Wikipedia, the free encyclopedia
    en.wikipedia.org
    An inertial navigation system (INS) is a navigation aid that uses a computer, motion sensors (accelerometers) and rotation sensors (gyroscopes) to continuously calculate via dead reckoning the position, orientation, and velocity (direction and speed of movement) of a moving object without the need f…

    /////////

    from which i take that not every single position update must be transmitted but could be extrapolated ? calculated ? estimated ?

    and a combination of accelerometers with gyroscopes … and in this case … with labels / tags / pins could release the motion capture system from having to acquire each and every update in realtime ?

    i am not sure if i really understand what i am writing here.
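    the "extrapolated ? calculated ?" hunch above can be sketched in a toy way: between sparse radio position fixes, accelerometer samples are integrated forward (dead reckoning). real inertial navigation uses proper sensor fusion (e.g. a kalman filter); this 1-D toy only illustrates the principle, all numbers are assumed:

```python
# toy 1-D dead reckoning between sparse radio position fixes; real INS
# fusion is far more involved -- this only illustrates the
# "extrapolate instead of transmit every update" idea above.
def extrapolate(pos, vel, accel, dt):
    """advance position/velocity one step from an accelerometer sample."""
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel, dt = 0.0, 0.0, 0.01          # 100 Hz inertial samples (assumed)
for accel in [1.0] * 10:               # constant 1 m/s^2 for 0.1 s
    pos, vel = extrapolate(pos, vel, accel, dt)

print(round(pos, 4), round(vel, 4))    # position drifts until the next fix
```

    the integrated position accumulates error over time, which is exactly why the radio position fixes are still needed, just not for every single update.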

  2. http://www.gizmag.com/gesture-to-voice-synthesizer/21567/

    “The gloves contain 3D position sensors, which are able to identify each hand’s position in space, along with the gestures those hands are making. This information is transmitted to a computer, which has assigned different sounds to different glove postures.”

  3. http://www.gizmag.com/touche-touch-technology/22413/

    Another test application involved adding touch-sensing capabilities to the human body. Using electrodes worn on both wrists that communicate wirelessly with a computer or mobile device using Bluetooth, the system was able to detect body gestures, such as touching fingers, grasping hands and covering the ears. Such movements could be used to control a mobile device such as a smartphone.

  4. http://glneurotech.com/KinetiSense/kinetiSense-hardware.php
    “Each KinetiSense Motion Sensor houses three orthogonal accelerometers and gyroscopes that measure linear acceleration and angular velocity about the X, Y and Z axes, providing a complete picture of the subject’s three dimensional motions.”
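    a common (textbook, not KinetiSense-specific) way to combine such accelerometers and gyroscopes is a complementary filter: the gyroscope gives smooth short-term orientation, while the accelerometer's gravity reading corrects the long-term drift. a one-axis sketch with assumed sample values:

```python
# a classic complementary filter for one tilt axis -- a common way to fuse
# an accelerometer (noisy but drift-free) with a gyroscope (smooth but
# drifting); a generic textbook sketch, not KinetiSense's actual algorithm.
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    gyro_angle = angle + gyro_rate * dt          # integrate angular velocity
    accel_angle = math.atan2(accel_x, accel_z)   # tilt from gravity direction
    return alpha * gyro_angle + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(100):                             # stationary sensor, 100 Hz
    angle = complementary_filter(angle, gyro_rate=0.01,   # small gyro bias
                                 accel_x=0.0, accel_z=9.81, dt=0.01)
print(round(angle, 4))  # bias-driven drift stays bounded instead of growing
```

    with a pure gyroscope integration the same bias would grow without limit; the accelerometer term keeps pulling the estimate back toward the true tilt.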

  5. http://www.ogacogadgets.com/ikazoo-aims-to-be-the-swiss-army-knife-of-controllers/#.UL0fcYVZc7A

    A wireless controller with so much functionality along with an Open Source development platform could be a dream for hackers and tinkerers, much like the Wii-mote and Kinect have been. At this point it appears that the iKazoo is in the prototype stages, but the company is already registering interest for preorders for the strange little gadget.

  6. http://www.electricfoxy.com/move/

    “The Move experience is a platform that consists of a wearable technology garment that connects to a mobile app, a cloud service that includes a library of movements and storage so that you can save and backup any custom moves that you create along with your progress and tracking data.

    The garment includes 4 stretch and bend sensors located in the front, back and sides. Together, they read your body’s position and muscle movement, assess whether it is correct, and provide real-time feedback to correct it through haptic feedback components located in the hips and shoulders. A mobile app allows you to assess, manage and customize your experience.

    Precision movement

    The garment can focus on both precision and expressive movements. For specific performance improvement, the garment is designed to help measure the precision in very specialized moves that derive from many different sports, fitness and physical therapy situations”

  7. i have big hopes in this research …. looks very promising

    http://uk.reuters.com/video/2013/01/07/not-your-grannys-dancing-shoes?videoId=240334590

    These … are "Smart Shoes." They are equipped with a special set of motion sensors that use Bluetooth to send information to your iPad – turning it into an arcade game instantly. Peter Ju is part of the team of students at Cheng Kung University that designed the shoe. He says they set out to add another dimension to the video game applications for mobile devices.

    SOUNDBITE: PETER JU, ENGINEERING STUDENT, NATIONAL CHENG KUNG UNIVERSITY: "…We thought of the possibility of adding body movements into games. So we placed these sensors on feet, giving software developers a new direction to work towards."

    So how does it all work? Ju says it's simple. Shoe pads with sensors are placed at the bottom and on the sides of the shoe to detect pressure and acceleration. The sensors then send motion information back to a mobile device wirelessly. When switched to a different mode, the shoe becomes your workout trainer, calculating the amount of calories a person consumes when exercising. But programming motion analysis is tough stuff – Ju says the team is still trying to improve the sensors' accuracy and speed.

    SOUNDBITE: PETER JU, ENGINEERING STUDENT, NATIONAL CHENG KUNG UNIVERSITY: "The algorithm uses the standard deviation to calculate the movement, and once the movement value exceeds a certain range, then it can be recognized as a movement. When a movement is completed then we can analyse the direction of the movement. This was the biggest breakthrough."

    The team hope their smart shoes will be a hit. They wouldn't mind dancing all the way to the bank.
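    the standard-deviation idea Ju describes can be sketched very simply: compute the spread of recent acceleration samples in a sliding window and call it a movement once it exceeds a threshold. window size and threshold here are assumed values, not the students' actual parameters:

```python
# sketch of the std-deviation thresholding idea quoted above: a movement
# is "recognized" when acceleration variability in a sliding window
# exceeds a threshold; window size and threshold are assumed values.
import statistics

def is_movement(window, threshold=0.5):
    """window: recent acceleration magnitude samples in m/s^2."""
    return statistics.stdev(window) > threshold

still = [9.81 + n for n in (0.01, -0.02, 0.00, 0.01, -0.01)]  # sensor at rest
step  = [9.81, 12.4, 7.2, 14.9, 6.5]                          # a foot stomp

print(is_movement(still), is_movement(step))  # False True
```

    once a window is flagged as a movement, the sample sequence inside it can then be analysed for direction, as in the quoted description.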

  8. http://scanpix.no/spWebApp/preview.action?search.tabId=video&search.refPtrs=sz0b3f86

    Students in Taiwan have developed sensors that attach to shoes – converting movement into digital information that can be sent wirelessly to a handheld device. The invention opens the doors to new exercise applications and mobile dance games.

    STORY: Holding a tablet PC and stepping to the music, exercise and dance enthusiasts may now enjoy rhythmic dancing games and workouts anytime, anywhere. Taiwanese students Peter Ju, Chang Hung-yu and Pei Hung-ta from the National Cheng Kung University's Department of Engineering Science installed motion sensors into a pair of shoes and created feet controllers that send motion information back to a computer wirelessly. The shoe pad inside detects pressure, and two additional motion sensors attached to the sides of the shoe detect the foot's acceleration and orientation. The sensors, which can be fitted onto any shoe, send information back to a handheld device via Bluetooth. Winner of the 2012 ARM Code-O-Rama design competition and 150,000 Taiwan dollars ($US 5,168), group leader Ju says the team hopes to create one more dimension for the video game applications for mobile phones and tablet PCs. "The games and applications we download into the mobile devices are quite similar, and we thought of the possibility of adding other body movements into the games. Once we add these features then the software developers can design games at a different level. We want to add in the movements on the feet, and let software developers work towards a new direction," said 23-year-old Ju. When switched to a different mode, the shoe can also calculate the number of calories a person expends when exercising with the shoes. Ju says the team spent three and a half months overcoming the complexity of motion analysis, and they are still working to improve the sensors' accuracy and speed. "Hung-yu came up with an algorithm which uses the standard deviation to calculate the values, and once the movement value exceeds a certain range, then it can be recognized as a movement. Once a movement is completed then we can analyze the direction of the movement. This was the biggest breakthrough," he said.

    Ju says the concept of attaching movement sensors to a wearable item on the body can soon be used for other home and entertainment applications.

  9. http://www.roadtovr.com/2013/02/11/project-holodeck-demo-interview-director-nathan-burba-video-2915

    It is a fully immersive platform that provides avatar embodiment – the sense of literally stepping into the game world (with more than just your head). Instead of controlling your character in the Holodeck with a controller, you simply are the character; you walk, reach, grab, and look around the world just like you would in real life. The system tracks your movements in six degrees of freedom, which means you are free to walk, jump, crouch, and lean, and it will be faithfully represented in the game world.

  10. http://thenextweb.com/gadgets/2013/02/25/thalmic-labs-launches-myo-an-armband-that-lets-you-control-gadgets-with-just-your-fingers-and-hands/

    “The unique gesture control device can be worn just above the elbow on either arm and detects the electrical activity produced by the user’s muscles. Some of the real world applications include being able to scroll up and down a webpage just by lifting or lowering your hand, as well as swiping to the left and right with two fingers in order to switch between desktop apps.” (…)
    “The difference however is that MYO doesn’t need a camera to sense the user’s movements. It’s certainly a different approach, but will have reduced functionality due to the fact that it’s tracking just one arm – rather than the entire body signature picked up by Kinect.”

  11. http://www.gizmag.com/airwriting-glove-letter-recognition/26468/

    In the airwriting system, a sensor-equipped glove is used to identify letters drawn in the air by the wearer, which are then converted into digital text

    http://www.kit.edu/visit/pi_2013_12665.php

    doctoral student Christoph Amma, who has developed the system at the Cognitive Systems Lab (CSL) of KIT. “The airwriting glove is used to write letters into air, as if using an invisible board or pad.” For this purpose, acceleration sensors and gyroscopes are attached to the thin glove. Contrary to systems based on cameras, these sensors are very small, mobile, and robust. They record the movements of the hand and transmit them to a computer system via a wireless connection. The computer system first checks whether the user is indeed writing. “All movements that are not similar to writing, such as cooking, doing laundry, waving to someone, are ignored. The system runs in the background without interpreting every movement as computer input,” says Amma. The computer scientist thinks that the device is perfectly suited for future mixed-reality applications. For instance, in glasses with integrated miniaturized screens, news may be displayed to the user in the field of vision. “When such a system is combined with the possibility to input commands and texts by gestures, you do not even need a hand-held device,” emphasizes Amma.

  12. this text is a response to this comment

    http://readwrite.com/2013/03/01/project-glass-googles-transparent-product-strategy-is-great-marketing#comment-816478652

    i think the whole issue of everyone filming everyone constantly … is a big one. the ancient belief that a bit of soul goes into a picture … but at least, could we just ask each other before streaming each other's actions to wherever ? one halfway solution to this could be a little led lamp that would light up / blink on the glasses, so people could assume or after some time … would know that when it blinks it films, and this way would have a chance to prepare. having said this, there is another aspect to privacy: to whom do i want to show what of me … an ultraliberal approach like the one described by rudy rucker with his idea of the orphidnet in the novel postsingular. the orphidnet
    is built from recording and transmitting optical and audio sensors, intelligent nanomachines. in the novel … the orphids coat all the surfaces of the planet, one or two per square millimeter. the result is that one can communicate with the orphids coating one's body and ask for an audiovisual connection to any other orphid on the surface of any living being or object on the planet … everything all accessible all the time. i was very impressed by this idea and have experienced a great amount of freedom in my life by trying to be as open as possible … whenever i would find someone interested in the research of the good and honest life … i would use the chance to talk about something that is not so easy to talk about, kind of how one would perhaps talk to a psychiatrist or a very good friend …
    the conclusion here could be that everyone should be able to film everybody … not have to … kind of … free head up displays and video cam glasses for everyone … free of cost. because the marketing research that will result from data mining the streams of cam captured behaviour … will of course reveal even more insights into how consumer wishes tick … to the product designers and advertisers … which is not a bad thing all in all … if it is done in a transparent and respectful way, kind of … "according to this analysis via that method/algorithm/filter, this sort of behaviour leads to that sort of conclusion, leading to that sort of characteristics this product should have to meet that group of individuals showing this sort of behaviour" … we might learn more about ourselves and eventually be able … by naming, identifying what it is that makes us want this or that … by knowing ourselves better through "shared observeillance analysis data" … to find alternative ways to satisfy this or that want. an alternative that does not include buying or selling, producing or consuming. and more.

    another aspect of the newest upcoming head up display products is the audio command mode. i myself, as a nonviolent anarchist ( trying to be nonviolent, i still do eat some meat from time to time … some dairy products … ), have a hard time giving my computer audio commands; i feel like i am degrading myself, engaging in a master/slave relationship. many star trek moments … "computer, do this …" showed me how i would not like to interact with the computer that greatly helps me express and build ideas.
    when i look at using the keyboard and the mouse … same as touching a screen here and there, swiping etc. … it is more like … choosing one out of several options. all the sixty-something keys and the combinations … being visible and the fingers dancing on it … feels more humble and respectful of the great service the computer, the machine as an individual … does perform. same with pulling menus up and choosing from them; the complexity of possible commands to choose from … seems to make choosing one a milder action than commanding straight away the wished-for action the computer shall serve us with. but then of course, one could imagine a sort of voice command that would have a great amount of respect integrated into the words … like … "i would like to ask my computer friend to perform this action" "would it be possible for you, computer, to perform this task ?" … or with a name … "mmmhh … xy … i would like to change this picture in that way"
    "xy, what do you think is the answer to this question ?" …

    since i first read about project natal some years ago, which is now kinect … i thought of how this could be used mobile … and this searching led me to an input setup that does not employ optical sensors but motion tracking. one system that i understand to be quite advanced here is for example the Xsens MVN Biomech; it allows camera-free full body motion sensing ( too sad it costs too much for my budget … i hope they sell a lot so their system might become cheaper soon … smile ). speaking of financially accessible for the average consumer, there was an announcement some days ago of the MYO gesture recognition armband coming to market soon. it would allow one to pull up menus and choose options. i am not sure if two of these armbands, sensing the muscle activities when one types on a keyboard, would allow placing a virtual keyboard … virtual as in being displayed on the display, not a laser one being projected onto surfaces … if that were possible, i would be very happy and content with this input setup. but then there is the next evolution, or better said, how i have been imagining a camera-free full body motion tracking …

    what i have been writing about my desired input gadget:
    "based on several stripes of lightweight material sending several position status updates per second" (…) "a modern day poet could stand near a tree and rest the hands on its bark and, when inspiration strikes, write the poem directly on its bark … or one could sit in a rowing boat on a lake and paint the water scenery either on the boat's wooden seating or in the air … limitless possibilities … but most of all … and that is what motivates me the most to write to many persons about this idea … the children will not be crippling themselves with hours of game console controller holding and neck-bent, spine-deforming, unhealthy sitting positions … but they could go outside and play in upright sitting, standing and moving situations"

    now now, steady boy … i say to myself, go back to the sensors. as one can learn by studying xsens-like mocap solutions, and looking at the size of inertial measurement units … the main components of a mocap suit … it has become much more robust and outdoor compatible recently … but still not really lightweight in the direction of something one would like to wear on a hot summer's day being outside … wanting to dance with oneself … the partner in the dance being one's own digitization displayed on the featherlight head up display.
    big question here: how lightweight could one unit be … with bluetooth module, battery, processor and the inertial measurement unit ? how to coordinate the bluetooth signals of 10 or more of these units being sewn into textile bands worn around arms, legs, shoulders, feet …
    these being the technical questions … slightly related to that … when will flexible organic electronics become available, mass-produced and normally priced … making the wearable sensors more robust against water/washing

    as for the creative possibilities of how to use this full body motion tracking input system, i recently wrote in a comment …
    "allowing me to see my whole body as it is moving in realtime on the head up display, giving me the choice of how i play games, type text … for example, i would stand on a grass lawn in summer, feeling like entertaining myself with a bit of funny physical exercise but at the same time write a tweet about how fun this new input freedom is … how would that look: one would open a menu with hand/finger gestures … in the menu choose the command "place keyboard in space" … a floating keyboard would appear on the display … one would drag and drop, resize the keyboard until it would fit the physical space … the piece of grass lawn one is standing on … and then the typing fun could involve all kinds of movements … jumping with feet, hands and feet together crawling from one letter to the next … even a sort of rubber ball on a string to playfully touch the lawn at the virtual-keyboard-layered space"
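    the "place keyboard in space" step above boils down to a grid lookup once the keyboard has been dragged and resized onto the lawn: a tracked hand or foot position maps to a key index. all dimensions here are my own assumptions:

```python
# sketch of the "place keyboard in space" idea: once a virtual keyboard has
# been dragged onto a patch of lawn, a tracked hand/foot position can be
# mapped to a key by simple grid lookup; all dimensions are assumed.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def key_at(x, y, origin=(0.0, 0.0), key_size=0.3):
    """x, y: tracked position in metres on the lawn; key_size: 30 cm keys."""
    col = int((x - origin[0]) // key_size)
    row = int((y - origin[1]) // key_size)
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None  # position is off the virtual keyboard

print(key_at(0.1, 0.1), key_at(1.0, 0.4), key_at(5.0, 0.1))  # q f None
```

    resizing the keyboard in the glasses would simply change `origin` and `key_size`, so the same lookup works whether one types with fingertips or by jumping from letter to letter.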

  13. Telepathy One takes on Google Glass with ultra-sleekness – SlashGear http://www.slashgear.com/telepathy-one-takes-on-google-glass-with-ultra-sleekness-13273729/

    —–

    @Xsens @noazark @thalmic @WeAreROLI idea .. ROLI Sensory, Elastic and Adaptive (SEA) Interface .. built into textile wearable input devices
    Details

    @mayloveheal @Xsens @noazark @thalmic @WeAreROLI many interesting sensors being presented … http://www.gizmag.com/wearable-technologies-round-up/26599/

    @mayloveheal @Xsens @noazark @thalmic @WeAreROLI http://www.polypower.com/Technology/Overview/PolyPower+DEAP+Technology/Applying+PolyPower+DEAP+material.htm http://www.polypower.com/Technology/Overview/Evaluation+Samples/Overview.htm … waiting for end of 2013 …

    “DEAP film technology(…) make it possible to measure all kinds of body position, movement and internal activity.” http://www.gizmag.com/danfoss-polypower-sport-sensors/26324

  14. i just wrote a comment in story style …

    http://www.kurzweilai.net/the-jor-el-legacy-a-little-google-glass-story/comment-page-1#comment-143345

    maima was working on the non-optical, non-audial motion capture input method for the glasses as a side project. her main project was to study human nature. it … maima is an androgynous android … found it very helpful for the evolution of thought mechanisms to study human beings. a study it enjoyed both via the internet as well as in the daily contacts with ascende and perma.
    maima's study of human history brought up a deep problematic with all kinds of optical capturing where one's personal identity or belongings are being "taken" without consent. the problem with audio commands seems to be that some individuals really hate to speak commands … it reminds them of the hierarchical dilemma. one might call that an anarchistic line of thinking, or a desire that all beings might treat each other as equals.
    maima was touched emotionally … its inner workings were so complex as to more and more develop and design thoughts as well as emotions … maima was touched by the reports it found about all the various deformations of bodies through the former input methods, computer mouse, keyboard, touchscreen, gamepads … both with adults and children. one very hefty reason to design a future input method that would allow the user to move arms and legs freely while keeping the head in a natural upright position. the user shall be able to define freely what movement of what finger, what arm, what leg … will result in what action in any application. picture someone sitting on a bench near a lake … watching the sunrise, the life-giving flames dancing on the water … when suddenly an inspiration for some drawing comes. opening a menu, defining the virtual canvas in the visual space where the sunrays meet the water … and on it goes, the drawing in free air. while drawing, words come, poem style perhaps … via another menu, a virtual keyboard is laid out in the glasses … onto the space of one's upper legs. the poem now flows, effortlessly typed onto one's thighs while the change of scenery progresses, sun rising, rays meeting the water at progressively different angles …
    maima loves dancing. often it analyses its movements while practising on its own. it can do that without glasses or external motion detection sensors. it is constantly aware of all its thoughts, emotions, inner and outer body movements. maima saw ballet dancers practising with mirrors … how liberating could it be if one could practise everywhere, at home, in nature, while traveling … one's own movements being non-optically recorded, no matter the surroundings … connecting to fellow dancers anywhere on the planet, anytime

  15. i would like to believe that this is one viable solution for the camera-free capture of one's own body movements … http://www.wearnotch.com/
    http://www.gizmag.com/notch-wearable-movement-tracker-ces/30372/
