MIT media lab's 'recompose': a 120-panel three-dimensional digital interactive device

the MIT media lab has published research on a three-dimensional real-time device that responds to users' actual touch and gestures. as many of you know, the media lab stands at the forefront of media research worldwide.

for now we are still working with two-dimensional keyboards and mice, but it seems an era of new input/output paradigms, like the one shown in minority report, is coming soon. i remember having an in-depth conversation with our members about this: we talked about how the future would be an era of invisible, hidden high technology, in which technology develops around people and becomes emotional and intuitive rather than mechanical. machines were made for people, and they should be used to benefit people.

'recompose', the work of matthew blackshaw, anthony devincenzi, dávid lakatos, daniel leithinger,
and hiroshi ishii at america's MIT media lab, integrates user input and visual output into a single device.
responding to both physical touch and gestural interaction, the system offers new ways
of interacting in realtime with three-dimensional representations.

'recompose' consists of 120 physical tiles, mounted on small rods that rise or sink in response to user input
as on a typewriter. in addition to responding to direct presses, however, the keys of 'recompose' also react to gestural input,
such that moving one's fingers over keys or making the gesture of pulling up or pushing down will cause the same effect.

the device currently recognizes five kinds of user behaviour:
'selection' involves the projection of light onto the device's surface;
the 'actuation' gesture will raise the selected keys;
and 'translation', 'rotation', and 'scale' interactions all modify the selected input accordingly.
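the five behaviours above can be sketched as operations on a grid of actuated tiles. this is a minimal illustrative model, not the actual MIT media lab code: the class, method names, and the 10 x 12 layout of the 120 tiles are all assumptions.

```python
# Hypothetical sketch of 'recompose'-style gestures acting on a grid of
# actuated tiles. Names and layout are assumptions for illustration only.

ROWS, COLS = 10, 12  # 120 tiles in total

class TileGrid:
    def __init__(self):
        # height of each tile's rod, in millimetres
        self.heights = [[0.0] * COLS for _ in range(ROWS)]
        self.selected = set()  # (row, col) tiles highlighted by projected light

    def select(self, cells):
        """'selection': light is projected onto these tiles."""
        self.selected = set(cells)

    def actuate(self, delta):
        """'actuation': raise (or lower) the selected tiles."""
        for r, c in self.selected:
            self.heights[r][c] += delta

    def translate(self, dr, dc):
        """'translation': move the selected region across the surface."""
        moved = {}
        for r, c in self.selected:
            nr, nc = r + dr, c + dc
            if 0 <= nr < ROWS and 0 <= nc < COLS:
                moved[(nr, nc)] = self.heights[r][c]
            self.heights[r][c] = 0.0
        for (r, c), h in moved.items():
            self.heights[r][c] = h
        self.selected = set(moved)

    def scale(self, factor):
        """'scale': multiply the heights of the selected tiles."""
        for r, c in self.selected:
            self.heights[r][c] *= factor
```

('rotation' would follow the same pattern as 'translation', remapping selected cells around a pivot; it is omitted here for brevity.)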

not merely a user interface, 'recompose' can also be used for visualization,
such as the three-dimensional display of graphs,
or the representation of the relative 'pressure' of user gestural input.

because it is at once an input and an output device, 'recompose' offers completely new modes of visualization functionality,
such as the ability to effect three-dimensional changes in the surface via gestures.
although for the current model, this can produce only the scaling, rotation, and other manipulations of simple shapes,
one can easily imagine the ways in which a more finely detailed surface might provide three-dimensional interactive visualizations
of modeling projects, CAD designs, or other visual and infographic data.

the device is based on team member daniel leithinger's 'relief' table, a similar input/output device for tactile input.
to this basic model, 'recompose' adds a depth camera and projector above the table. with input from the camera,
computer vision detects user interaction and determines the position, orientation, and depth of the hands and fingers
and relays the desired changes to the individual keys.
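the sensing step described above can be sketched in a few lines: a depth frame from the overhead camera is thresholded against the known table distance, and the centroid and mean depth of the near pixels stand in for the hand's position and height. this is a minimal sketch under assumed values; the thresholds, names, and the threshold-plus-centroid approach are illustrative, not the project's actual computer-vision code.

```python
# Minimal sketch: locating a hand over the table from an overhead depth frame.
# The frame is a 2D list of distances in millimetres; pixels markedly nearer
# than the table plane are treated as hand pixels. Constants are assumptions.

TABLE_DEPTH_MM = 1000.0   # assumed camera-to-table distance
HAND_MARGIN_MM = 50.0     # pixels this much nearer count as hand pixels

def detect_hand(depth_frame):
    """Return ((row, col) centroid, mean depth) of hand pixels, or None."""
    pixels = [
        (r, c, d)
        for r, row in enumerate(depth_frame)
        for c, d in enumerate(row)
        if d < TABLE_DEPTH_MM - HAND_MARGIN_MM
    ]
    if not pixels:
        return None
    n = len(pixels)
    centroid_row = sum(r for r, _, _ in pixels) / n
    centroid_col = sum(c for _, c, _ in pixels) / n
    mean_depth = sum(d for _, _, d in pixels) / n
    return (centroid_row, centroid_col), mean_depth
```

tracking the centroid and depth across frames would then let the system classify the gesture (e.g. a falling depth over selected tiles reads as pushing down) and relay the change to the tile actuators.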

concept diagram of gestural interactions, clockwise from top left: selection (2 images), actuation, translation, scaling (both images), and rotation (both images).

the view of input as modeled through computer vision

from designboom