This link was saved by rsato on September 09, 2010
The GestureWorks multitouch authoring environment for Flash and Flex ships with a library of unique gestures. This gesture library is built upon an open source gesture framework, allowing developers to customize and extend the “gesture object” to create support for new gestures. This gives the developer the flexibility to customize gestures to suit the project, and, as new gestures become common, they can be easily incorporated, helping to “future proof” your applications.
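The extensibility pattern described above — subclassing a base "gesture object" and registering it with the framework — can be sketched in a minimal, illustrative form. This is not the GestureWorks API (which is ActionScript); the class names and matching logic here are hypothetical, chosen only to show the pluggable-gesture idea.

```python
# Illustrative sketch only (not the GestureWorks API): a pluggable
# "gesture object" pattern, where a new gesture subclasses a common
# base class and registers itself with a recognizer.

class Gesture:
    """Base class: subclasses inspect a touch trail and report a match."""
    name = "gesture"

    def matches(self, points):
        raise NotImplementedError


class SwipeRight(Gesture):
    """Hypothetical custom gesture: a mostly-horizontal left-to-right drag."""
    name = "swipe-right"

    def matches(self, points):
        (x0, y0), (x1, y1) = points[0], points[-1]
        dx, dy = x1 - x0, y1 - y0
        # Require meaningful rightward travel that dominates vertical drift.
        return dx > 50 and abs(dy) < abs(dx) / 2


class Recognizer:
    """Holds registered gestures; returns the name of the first match."""
    def __init__(self):
        self.gestures = []

    def register(self, gesture):
        self.gestures.append(gesture)

    def recognize(self, points):
        for g in self.gestures:
            if g.matches(points):
                return g.name
        return None


recognizer = Recognizer()
recognizer.register(SwipeRight())
print(recognizer.recognize([(0, 0), (40, 2), (120, 5)]))  # swipe-right
```

Because recognition lives in the gesture subclass rather than the framework core, supporting a new gesture is just writing one more class and registering it — which is the "future proofing" the paragraph describes.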
This link was saved by rsato on September 26, 2009
It all started while we were researching an article on future user interfaces. Touch interfaces are hardly futuristic at this point, but multi-touch hardware like the Microsoft Surface or the iPhone is just starting to become a big deal, and we decided to see what big things are going on in that field. What surprised us most wasn't anything about the future of multitouch; it was something people are doing right now.
This link was saved by rsato on June 20, 2009
PyMT, as the name implies, is a Python-based framework. I’ve gotten to know Nathanaël Lécaudé, a talented artist and coder who was nice enough to put me up a couple of nights while I was in Montreal; he’s one of several core coders. They’re doing a lot to really encapsulate functionality in widgets in a nice way. Features of PyMT include an event framework, specialized widgets for gesture, touch, and layout, and connections to OpenGL, OpenGL shaders, and sound. You can even work with the enduring, evergreen synthesis language Csound using its Python bridge, the oddly-named but powerful Ounk.
This link was saved by rsato on April 22, 2009
We’ve talked in the past about the idea of user interfaces and visual output merging. Instead of a UI on one screen and visuals on another, the idea is that the interface itself melds into the output. I can think of few better examples of how this begins to evolve than a video recently posted on Vimeo by user nucode. Working with a projected, camera-tracked multi-touch interface and audiovisual loops in custom Flash-based software, nucode manipulates samples as though on an alien, futuristic interface.
This link was saved by rsato on April 19, 2009
- Occlusion (what content is blocked by hands or other people)
- Button size (Fitts' Law)
- Text size (legible from 4' away)
- Usable from multiple directions
- Highly visual: The fewer words and the more specific the imagery, the easier it will be for a broad demographic to use the device.
- No irrelevant information (Anyone ordering olives in their Bailey's simply needs to be cut off)
- Transitions: Plan transitions carefully when working on your state/transition diagram. The way content and controllers animate can do a huge amount to suggest what they do, how to use them, and whether touch events have been registered.
- Strong progressive disclosure
- Modeless controls
- Privacy (what information various users should have access to, and when; private PINs; etc.)
- Provide a visual cue that all touches have been registered, even when they don't trigger a state change.
This link was saved by rsato on May 17, 2008