Multimodal exploration of the fourth dimension
Hanson, A. & Zhang, H.
http://www.cs.indiana.edu/~huizhang/viz05.pdf
DOI: 10.1109/VISUAL.2005.1532804
Abstract:
We present a multimodal paradigm for exploring topological surfaces embedded in four dimensions; we exploit haptic methods in particular to overcome the intrinsic limitations of 3D graphics images and 3D physical models. The basic problem is that, just as 2D shadows of 3D curves lose structure where lines cross, 3D graphics projections of smooth 4D topological surfaces are interrupted where one surface intersects another. Furthermore, if one attempts to trace real knotted ropes or plastic models of self-intersecting surfaces with a fingertip, one inevitably collides with parts of the physical artifact. In this work, we exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of the object being explored. For our principal test case of 4D-embedded surfaces projected to 3D, this permits us to follow the full local continuity of the surface as though in fact we were touching an actual 4D object. We exploit additional sensory cues to provide supplementary or redundant information. For example, we can use audio tags to note the relative 4D depth of illusory 3D surface intersections produced by projection from 4D, as well as providing automated refinement of the tactile exploration path to eliminate jitter and snagging, resulting in a much cleaner exploratory motion than a bare uncorrected motion. Visual enhancements provide still further improvement to the feedback: by opening a view-direction-defined cutaway into the interior of the 3D surface projection, we allow the viewer to keep the haptic probe continuously in view as it traverses any touchable part of the object. Finally, we extend the static tactile exploration framework using a dynamic mode that links each stylus motion to a change in orientation that creates at each instant a maximal-area screen projection of a neighborhood of the current point of interest.
This minimizes 4D distortion and permits true metric sizes to be deduced locally at any point. All these methods combine to reveal the full richness of the complex spatial relationships of the target shapes, and to overcome many expected perceptual limitations in 4D visualization.
Showing posts with label Hyper-spatial Minds. Show all posts
Friday, November 23, 2012
Mathematical Fiction: Geometry
From Alex Kasman's Mathematical Fiction database, stories about topology, geometry, or trig.
http://kasmana.people.cofc.edu/MATHFICT/search.php?go=yes&topics=gtt&orderby=title
Wednesday, September 7, 2011
New Scientist TV: One-Minute Physics: How do we know our world is 3D?
How do we know we live in three dimensions? In this One Minute Physics episode, animator Henry Reich explores the concept of multiple dimensions and shows one way to test that we live in a 3D world.
Monday, November 29, 2010
Naked Hypercube
Tesseracts can be made from, among other things, the unclothed. NSFW? Don't know/don't care.
Friday, July 9, 2010
Ipercubo: una rotazione simultanea del tesseratto (Hypercube: a simultaneous rotation of the tesseract)
http://connectingtometaverse.tumblr.com/post/781460669/ipercubo-una-rotazione-simultanea-del-tesseratto
Wednesday, May 19, 2010
Hypercube
Quotes:
The user can rotate around the hypercube, or perform direct-manipulation rotations in 4D.

For a 4D rotation, the 3D vector described by the dragging of the mouse in the plane of the screen combined with the 4D unit vector (0 0 0 1) specify two basis vectors of a four-dimensional plane of rotation.

This is a lot more intuitive than a set of sliders.

Before I show an example of the 4D rotation, wrap your head around this simple 3D rotation of a regular old cube.
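The quoted scheme can be sketched in a few lines of numpy (this is a minimal sketch of the general idea, not code from the original applet): build the rotation in the 4-D plane spanned by the mouse-drag vector and the w-axis (0 0 0 1).

```python
import numpy as np

def plane_rotation_4d(u, v, theta):
    """Rotation by angle theta in the 4-D plane spanned by u and v."""
    u = u / np.linalg.norm(u)
    v = v - (v @ u) * u              # Gram-Schmidt: make v orthogonal to u
    v = v / np.linalg.norm(v)
    c, s = np.cos(theta), np.sin(theta)
    # R fixes the orthogonal complement of span(u, v) and rotates u toward v.
    return (np.eye(4)
            + (c - 1.0) * (np.outer(u, u) + np.outer(v, v))
            + s * (np.outer(v, u) - np.outer(u, v)))

# A mouse drag (dx, dy) in the screen plane, embedded as a 4-vector,
# paired with the w-axis (0 0 0 1) as in the quoted description.
dx, dy = 0.3, 0.1
drag = np.array([dx, dy, 0.0, 0.0])
w_axis = np.array([0.0, 0.0, 0.0, 1.0])
R = plane_rotation_4d(drag, w_axis, theta=np.hypot(dx, dy))

print(np.allclose(R @ R.T, np.eye(4)))  # True: R is a proper 4-D rotation
```

Mapping drag length to the rotation angle is one plausible choice; the original applet may scale it differently.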
This message was sent to you by petemandik via Diigo
Friday, October 2, 2009
Hinton's Cubes for Visualizing Hyper-solids
Thursday, June 25, 2009
Plasticity and the Perception of Higher Dimensions
Questions arise as to how to train someone to pull this off. What does it even mean for vision to take place in higher dimensions? It might be useful to think about the geometry of photography for a bit here. The photography of a three-dimensional object involves a projection onto a two-dimensional surface. Stereoscopy is accomplished by integrating projections from a single 3-D object onto two different 2-D surfaces. By analogy, photography in the fourth dimension would involve the projection of a hypersolid onto points in a 3-D volume. 4-D stereoscopy ("Hyperscopy"?) would then involve, I guess, the integration of projections of a single 4-D object onto points in different 3-D volumes.
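The hyperscopy analogy can be made concrete with a small numpy sketch (the baseline, eye distance, and function name are illustrative assumptions, not from the post): project the 16 vertices of a tesseract from two 4-D viewpoints separated by a small baseline; the disparity between the two 3-D projections depends only on the 4-D depth w, just as ordinary stereo disparity encodes z.

```python
import numpy as np

def project_w(points4d, eye):
    """Perspective projection R^4 -> R^3 from an eye near the w-axis."""
    return (points4d[:, :3] - eye[:3]) / (eye[3] - points4d[:, 3:4])

# Tesseract vertices: all 16 sign combinations of (+/-1, +/-1, +/-1, +/-1).
verts = np.array([[a, b, c, d] for a in (-1, 1) for b in (-1, 1)
                  for c in (-1, 1) for d in (-1, 1)], dtype=float)

# Two "4-D eyes" at w = 4, separated by a baseline of 0.2 along x.
left = project_w(verts, np.array([-0.1, 0.0, 0.0, 4.0]))
right = project_w(verts, np.array([0.1, 0.0, 0.0, 4.0]))

# Disparity is -baseline / (eye_w - w): it encodes 4-D depth alone.
disparity = right[:, 0] - left[:, 0]
print(np.allclose(disparity, -0.2 / (4.0 - verts[:, 3])))  # True
```

Integrating the two 3-D "images" to recover w is then the hyperscopic analogue of fusing a stereo pair.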
One might question whether a person, being only 3-D, could possibly accomplish hyperscopy, given that our irritable surfaces--our retina, etc.--are essentially only 2-D. The key is to realize that the dimensionality of our sensor arrays is potentially surmountable. The points in, e.g., our retina, can be mapped onto points in a volume--this is precisely what enables plain-old stereoscopy in the first place. And our brains are capable of representing higher-dimensional state-spaces: gustatory state-space is at least four-dimensional and olfactory state-space is six-dimensional.
Here then, in theory, is how to train someone to be hyperscopic. First off, the 4-D objects are going to have to be computer generated. Second, a computer-simulated 3-D retina--a 3-D array of voxels--will have the 4-D objects projected onto it. Third, information from each of these voxels will be projected--via video goggles--to a dedicated portion of the person's visual field. That is, the visual field will be partitioned into the same number of subregions as there are voxels in the 3-D computer-simulated retina. Fourth, equip the person with some means of rotating the 4-D objects (since having control over inputs seems to be important in perceptual plasticity). Fifth, train the person to perform 4-D object recognition tasks. Objects in the training set should include objects that can only be distinguished by their 4-D characteristics.
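Steps two and three of the regime above can be sketched as code: project 4-D points into a 3-D voxel "retina", then tile the voxel slices side by side into a flat 2-D image, one subregion per slice. This is a minimal sketch; the grid size, eye distance, and function name are all illustrative assumptions.

```python
import numpy as np

def project_to_voxel_retina(points4d, grid=8, span=2.0):
    """Project 4-D points into a grid^3 voxel retina (step two),
    then tile its z-slices into a flat 2-D image (step three)."""
    # Perspective divide along w: points deeper in w shrink toward center.
    eye_w = 4.0
    xyz = points4d[:, :3] / (eye_w - points4d[:, 3:4])
    # Bin the projected points into the voxel volume.
    idx = np.clip(((xyz + span / 2) / span * grid).astype(int), 0, grid - 1)
    retina = np.zeros((grid, grid, grid))
    for i, j, k in idx:
        retina[i, j, k] += 1
    # One 2-D subregion of the visual field per voxel slice.
    return np.hstack([retina[:, :, k] for k in range(grid)])

# Tesseract vertices: all 16 sign combinations of (+/-1, +/-1, +/-1, +/-1).
verts = np.array([[a, b, c, d] for a in (-1, 1) for b in (-1, 1)
                  for c in (-1, 1) for d in (-1, 1)], dtype=float)
image = project_to_voxel_retina(verts)
print(image.shape)  # (8, 64): eight 8x8 subregions laid side by side
```

The flattened image is what would be fed, region by region, to the goggles in step three; steps four and five (rotation control and recognition training) sit on top of this pipeline.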
If such a training regime could be successfully executed, would it be 4-D vision? Would the hyperscopist have 4-D qualia?