Behavioral Mapping
We observed people’s movements as they walked around the North Avenue Courtyard, recording twenty different people’s paths. Image 1 contains a compilation of all twenty paths. Image 2 separates the paths by layer so that each path can be viewed individually.
What paths are used most?
The most-used paths generally follow the sidewalks around the courtyard, running from the door of a building toward the bus stop. There are two well-worn paths in the grass of the courtyard, leading from the North Ave East (NAE) front door toward the bus stop or toward the North Ave South (NAS) front door. Most of the paths intersect at the balcony in front of the bus stops. Some people also came from the direction of the North Ave North and West buildings.
What spots are vacant or unused?
The most vacant, unused spot is the courtyard itself. Aside from the worn-down grass where people cut through, most of the courtyard remains untouched. In the satellite image, you can vaguely see where grass has been worn away by continual use.
Where do people come together in groups?
In Image 3, we recorded every location where a group of more than two people gathered. At the bus stops, most groups congregated to wait for the bus or for a ride from a friend picking them up. Groups also formed by the front doors of NAE and NAS, usually because people were either waiting to be let into the building by their friends or sitting outside to smoke together.
Where might you put the marker?
A marker would best be placed close to the balcony overlooking the bus stops. At this balcony, people arriving from all directions (the various sidewalks and the courtyard path) intersect as they decide whether to take the red bus or the blue bus. Placing a marker in the courtyard close to the balcony and the courtyard path would attract people’s attention as they walk by without obstructing traffic. For those interested enough in the marker, it is a short walk from the sidewalk to reach it.
How do you expect people to navigate your installation?
The marker placed in the courtyard would simply notify people of where to stand for the Catch King Kong game. The North Avenue East and South buildings remain static, so the buildings themselves could serve as markers for the AR tracking technology. We could also construct models of the two buildings so that the objects in the game know how to interact with them. Using markers in the courtyard to limit users to a static position will make it easier for the tracking technology to know exactly where the user is in relation to the buildings.
Image 1
Image 2 (Animated .gif)
Image 3
Error Analysis of Augmented Reality
Problems:
1. Marker is not read correctly
The main reason a marker is not read correctly is location. If the user of the AR device is standing 100 feet away from a marker painted on the ground, the marker will be heavily distorted from his point of view. As he gets closer, it gradually becomes less distorted, effectively restricting use of the marker to its general vicinity. This can be mitigated by mounting the marker higher up, but the restriction of only being able to use a marker from relatively close range will remain a problem for anyone trying to use a large space in an AR project.
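To make the geometry concrete, here is a small sketch (the function name, eye height, and numbers are our own illustration, not from any AR toolkit) of how sharply a marker painted on the ground foreshortens as the viewer backs away:

```python
import math

def apparent_marker_height(size_ft, distance_ft, eye_height_ft=5.0):
    """Approximate vertical angular size, in degrees, of a flat marker
    painted on the ground, seen from a viewer's eye height."""
    near = distance_ft - size_ft / 2
    far = distance_ft + size_ft / 2
    # Angles below the horizon to the near and far edges of the marker.
    ang_near = math.atan2(eye_height_ft, near)
    ang_far = math.atan2(eye_height_ft, far)
    return math.degrees(ang_near - ang_far)

# A 3 ft marker seen from 10 ft versus 100 ft away:
close = apparent_marker_height(3, 10)   # several degrees tall, readable
far = apparent_marker_height(3, 100)    # a sliver under a tenth of a degree
```

At 100 feet the marker subtends only a fraction of a degree vertically, far too thin for a phone camera to decode, which is why ground markers only work from the immediate vicinity.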
2. Marker falls out of view / screen goes blank
There is currently little integration of GPS, compass, and marker capabilities. Used together, all three could make triangulation much more accurate. If somebody were to point an AR device at a marker on the side of a ten-story building, the software should be able to triangulate their exact location from the viewing angle and relative size of the marker. By also incorporating the compass, the user's exact position and orientation could be determined with pinpoint accuracy. Once a person's location is determined, he should be able to move the marker out of the device's viewing window and still have things display (via the compass). Currently, the problem seems to be that programmers rely on either markers alone or a combination of GPS and compass, instead of integrating all three technologies, or at least pairing markers with compass capabilities.
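The marker-plus-compass idea above can be sketched with simple pinhole-camera geometry. Everything here is our own hypothetical illustration (function name, calibrated focal length, coordinates); it is not drawn from any real AR library:

```python
import math

def locate_user(marker_xy, marker_width_m, focal_px, apparent_px, heading_deg):
    """Estimate the user's ground position from a single marker sighting.

    marker_xy      -- known (east, north) position of the marker, metres
    marker_width_m -- physical width of the marker
    focal_px       -- camera focal length in pixels (assumed calibrated)
    apparent_px    -- width of the marker in the camera image, pixels
    heading_deg    -- compass bearing from the user toward the marker
    """
    # Pinhole-camera range estimate: distance = f * W / w
    distance = focal_px * marker_width_m / apparent_px
    rad = math.radians(heading_deg)
    # Step back from the marker along the opposite of the bearing.
    east = marker_xy[0] - distance * math.sin(rad)
    north = marker_xy[1] - distance * math.cos(rad)
    return east, north

# A 10 m wide marker at the origin, imaged 100 px wide by a 1000 px
# focal length camera while the user faces due north (heading 0):
pos = locate_user((0.0, 0.0), 10.0, 1000.0, 100.0, 0.0)  # → (0.0, -100.0)
```

Once this fix is computed, the compass alone can keep overlays positioned even after the marker leaves the frame, which is exactly the integration the paragraph argues for.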
3. Rendering software does not work fast enough to keep up with movement
This problem stems primarily from programming, since the limitations of any given environment can and should be taken into account when producing a piece of AR software. As we saw in class with the power AR piece, the model that pops up when the marker is read involves so much processing that it cannot display fluidly, especially if the user moves the marker and angles it differently. The simple fix is to keep in mind the processing capabilities of the environment the program will run in and proceed accordingly.
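One common way to "proceed accordingly" is to degrade the model rather than the frame rate. This is a minimal sketch of that idea, assuming a ~30 fps budget; the function name and thresholds are our own illustration:

```python
def choose_model_detail(avg_frame_ms, budget_ms=33.0):
    """Pick a level of detail for the pop-up model so rendering stays
    near the target frame budget (33 ms ~= 30 fps)."""
    if avg_frame_ms <= budget_ms:
        return "high"       # device keeps up; show the full model
    if avg_frame_ms <= budget_ms * 1.5:
        return "medium"     # slightly over budget; drop some geometry
    return "low"            # badly over budget; show the simplest model
```

A render loop would call this with a running average of recent frame times and swap models when the returned level changes.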
4. General coding errors
As always seems to be the case with computational software, bugs and errors abound. This is a problem with all computing rather than with AR specifically, but it seems reasonable to take it into account.
5. Information displayed on screen does not seem to line up with real world very well
When the real world changes, the program must change as well to reflect it. If it does not, the information displayed on the AR device may not flow as fluidly as it previously did. For instance, if somebody is using a building's model in their AR program and does not account for a new construction project that adds an atrium to the side of the building, the virtual model and the real-world building no longer match. Any interaction the user has with that real space in the virtual world will then feel less fluid than it would if the project were taken into account. In addition, as can be seen in many YouTube videos, actual buildings can be used as markers, signaling the software to load a virtual model of the building. If the virtual model is not made to scale or is made poorly, obstructions of view in the virtual world may seem out of place or misaligned.
6. Inaccuracy of GPS and compass functions
The GPS functions in smartphones are relatively accurate, but not accurate enough to suit the needs of AR. They can currently triangulate a user's position only to an area roughly a city block in size. With so many different interactions possible within that block alone, merely knowing that the user is standing somewhere within it is insufficient. Also, until recently the compass functions of smartphones could confuse two cardinal directions: if a user held his phone facing directly north, the software only knew that he was facing either directly north or directly south. Technology that resolves the user's exact heading now exists, but it is still not widespread in everyday cell phones.
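A quick back-of-the-envelope sketch (our own illustration, with an assumed block-sized error) shows why block-level GPS accuracy is fatal for AR overlays:

```python
import math

def alignment_error_deg(gps_error_m, target_distance_m):
    """Worst-case angular offset, in degrees, between where an overlay is
    drawn and where the real object sits, if the position fix is off by
    gps_error_m perpendicular to the line of sight."""
    return math.degrees(math.atan2(gps_error_m, target_distance_m))

# A city-block-sized (~80 m) position error, looking at a building 100 m away:
err = alignment_error_deg(80, 100)   # tens of degrees of misalignment
```

With the error measured in tens of degrees, the overlay would not even land on the correct building, which is why GPS alone cannot anchor the kind of building-scale interactions described above.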
7. Public gesture of holding up an AR device in front of you
Holding up a cellphone in public closely resembles the act of taking a picture; in fact, many people hold up their cellphones in public precisely to take a picture. This causes those around you to adjust their behavior to suit yours (or to not adjust it and risk upsetting you). People generally try to avoid walking directly in front of somebody holding a camera or phone out in front of them. If holding a cellphone up in front of you also comes to mean AR use, which does not require people to avoid your line of sight, camera users may be dismayed. The confusion that could result from many people regularly using AR on the streets could be a real hindrance to pedestrians and society.
8. Holding a device directly in front of you is not a comfortable position
This one is straightforward: holding a device directly in front of you at arm's length is not comfortable, and after a while it may even strain muscles. If this posture is a requirement of the AR technology, it is doubtful that many people will use it for extended periods of time.
Activity Analysis
List or represent in detail all tasks, actions, objects, performers, and interactions involved in the process of interacting with your installation. Based on this list, identify the components that you need to design in relationship to your concept.
Performers:
The Player(s): people will interact with our installation on a one-on-one basis. Though multiple people may play at the same time, their interaction with each other will be solely on a 'scoreboard' basis. Our players will primarily be Georgia Tech undergraduates living in the North Avenue Apartments, ages 18-24.
Objects:
- iPhone: the medium through which we will display our AR project; the iPhone has touch screen, GPS and compass capabilities.
- information plaques: these will be placed at strategic points (based on behavioral analysis) in the North Avenue courtyards
- markers/buildings: we will be using the North Avenue Apartments as our “markers” and using virtual building models as a frame to form the character animations
Tasks/Actions/Interactions:
- The player must first download the iPhone application to their phone. This will be done through the iTunes Store, accessed on either a computer or the phone itself.
- Players may begin the game from any point within North Avenue, but the information plaques will hold information on where the best vantage points to play are.
- Once the user selects the iPhone application on their phone, the game display will instruct the user to move forwards/backwards/etc in order to get one of the buildings into frame.
- The user must have a building marker in frame in order to begin the game.
- The player must choose from several game modes (possible modes include Quick Game, Easy/Medium/Hard Time Mode or Easy/Medium/Hard Item Mode) and also must choose their opponent (ex. King Kong, Godzilla, etc).
- Once the player is positioned and has chosen their game mode, they will press ‘Begin’ to start the game.
- The opponent will begin to move about the North Avenue Apartment buildings. The player must keep the opponent on the screen.
- The timer will start when the player presses begin, and pause every time the creature goes off-screen.
- If in Item Mode, the player may also see various items (slow down, plus size, etc icons) in the windows of the apartments. The player can grab these items by touching them on their screen.
- The player may then use the items by selecting the item at the bottom of their screen and then touching the opponent on the screen.
- Once the game has ended (differently based on different modes) the player may choose to upload their score to the general scoreboard or keep it for their personal records on the iPhone application.
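The timing rule in the steps above (the clock pauses whenever the opponent leaves the screen) can be sketched as a small game clock. This is our own illustrative pseudimplementation, not code from the actual application:

```python
class PausableTimer:
    """Game clock that accumulates time only while the opponent is
    visible on screen; it pauses whenever the creature goes off-screen."""

    def __init__(self):
        self.elapsed = 0.0      # seconds of on-screen play so far
        self._running = False   # was the opponent visible last frame?
        self._last = None       # timestamp of the previous update

    def update(self, now, opponent_on_screen):
        """Call once per frame with the current time and visibility."""
        if self._running and opponent_on_screen:
            # Opponent stayed visible across the whole interval: count it.
            self.elapsed += now - self._last
        self._running = opponent_on_screen
        self._last = now
        return self.elapsed
```

For example, if the creature is on screen from t=0 to t=1, off from t=2 to t=3, and on again from t=3 to t=4, the clock reports two seconds of play rather than four.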
Components (to be designed):
- iPhone application (UI design and game modes)
- 3D opponent models w/ animation
- 3D virtual building models (match w/ building markers)
- information plaques/signs
Research Conclusions
The behavioral mapping of the space gave our group a good sense of where to place informational plaques/flyers so that potential players may encounter them and decide to download the game. It also showed us which areas are out of the way of public traffic, making them good playing areas. Information plaques/flyers would be placed around the busy areas and point players toward 'vantage points' for playing the game.
Error analysis led us to look into alternative technology choices for our game. Since we aim to use the entire building, with players standing quite a distance away, the simple black-and-white marker squares were ruled out as an option. We are instead using a combination of newer technologies, including 3D models and the iPhone's new compass feature, to bypass some of the possible errors we encountered in our research. We also encountered the problem of having to hold the iPhone out in front of you. While we cannot completely eliminate this, placing the 'play space' in a less public area makes it less awkward for users, and the ability to pause game play or play a 'quick' game helps lessen the risk of players' arms getting tired.
Finally, our activity analysis led to the further development of the actual game play. During the analysis, we were forced to decide on specific features that we want the game to include and we came up with several new features for the game including power-ups and game modes. This shaped the user interface design sketches for our use case scenario seen above. The activity analysis also played a part in deciding the technology that we would use (discussed above as well).