Saturday, 16 November 2013

CUDA

CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It is a proprietary technology for GPGPU programming. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). CUDA is not just an API and a set of tools, but the name for the whole architecture.

CUDA, or Compute Unified Device Architecture, is the computing engine in Nvidia graphics processing units (GPUs) that is accessible to software developers through variants of industry-standard programming languages. Programmers use 'C for CUDA' (C with Nvidia extensions and certain restrictions), compiled through a PathScale Open64 C compiler, to code algorithms for execution on the GPU. The CUDA architecture shares a range of computational interfaces with two competitors: the Khronos Group's OpenCL and Microsoft's DirectCompute. Third-party wrappers are also available for Python, Perl, Fortran, Java, Ruby, Lua, Haskell, MATLAB and IDL, and native support exists in Mathematica (a computational software program used in scientific, engineering and mathematical fields and other areas of technical computing).

CUDA gives developers access to the virtual instruction set and memory of the parallel computational elements in CUDA GPUs. Using CUDA, the latest Nvidia GPUs become accessible for computation like CPUs. Unlike CPUs, however, GPUs have a parallel throughput architecture that emphasizes executing many concurrent threads slowly, rather than executing a single thread very quickly. This approach of solving general-purpose problems on GPUs is known as GPGPU.
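To make this concrete, here is a minimal 'C for CUDA' sketch of the model described above: a vector-addition kernel in which thousands of lightweight GPU threads each compute a single element. The kernel name, array size and launch configuration are illustrative choices, not taken from any particular SDK sample; the cudaMalloc/cudaMemcpy calls and the <<<blocks, threads>>> launch syntax are standard CUDA runtime API.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel: each GPU thread adds one pair of elements.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard the tail block
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;                 // 1M elements (illustrative)
        const size_t bytes = n * sizeof(float);

        // Host buffers.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device buffers, plus host-to-device copies.
        float *da, *db, *dc;
        cudaMalloc(&da, bytes);
        cudaMalloc(&db, bytes);
        cudaMalloc(&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(da, db, dc, n);

        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", hc[0]);          // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Compiled with nvcc, the one source file carries both the host code and the device kernel, which reflects the point above that CUDA is a whole architecture rather than just an API.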

In the computer game industry, in addition to graphics rendering, GPUs are used in game physics calculations (physical effects like debris, smoke, fire and fluids); examples include PhysX and Bullet. CUDA has also been used to accelerate non-graphical applications in computational biology, cryptography and other fields by an order of magnitude or more. An example of this is the BOINC distributed computing client. CUDA provides both a low-level API and a higher-level API. The initial CUDA SDK was made public on 15 February 2007, for Microsoft Windows and Linux. Mac OS X support was added later in version 2.0, which superseded the beta released on 14 February 2008. CUDA works with all Nvidia GPUs from the G8x series onwards, including the GeForce, Quadro and Tesla lines. CUDA is compatible with most standard operating systems. Nvidia states that programs developed for the G8x series will also work without modification on all future Nvidia video cards, due to binary compatibility.





With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for GPU computing with CUDA. Here are a few examples:

Identify hidden plaque in arteries: Heart attacks are the leading cause of death worldwide. Harvard Engineering, Harvard Medical School and Brigham & Women's Hospital have teamed up to use GPUs to simulate blood flow and identify hidden arterial plaque without invasive imaging.
GPU PerfStudio

GPU PerfStudio is a real-time performance analysis tool designed to help tune the graphics performance of your DirectX 9, DirectX 10, DirectX 11 and OpenGL applications. GPU PerfStudio displays real-time API, driver and hardware data, which can be visualized using extremely flexible plotting and bar-chart mechanisms. The application being profiled may be executed locally or remotely over the network. GPU PerfStudio allows the developer to override key rendering states in real time for rapid bottleneck detection. An auto-analysis window can be used to identify performance issues at various stages of the graphics pipeline. No special drivers or code modifications are needed to use GPU PerfStudio.

GPU PerfStudio gives developers control with seamless workflow integration. Spend more time writing code and less time debugging. Identify performance and algorithm issues early in the development cycle, and meet your quality and performance goals.
Key Features:
  • Integrated Frame Profiler
  • Integrated Frame Debugger
  • Integrated Shader Debugger with support for DirectX™ HLSL and ASM
  • Integrated API Trace with CPU timing information
  • Client / Server model
  • GPU PerfStudio 2 Client runs locally or remotely over the network
  • GPU PerfStudio 2 Server supports 32-bit and 64-bit applications
  • Supports DX11, DX10.1, DX10 and OpenGL 4.0 applications
  • No special build required for your application
  • Customizable Client GUI, define and save your own window layouts
  • Drag and drop your application onto the server to start debugging
  • No installation required – copy and run anywhere – your settings go with you.
Integrated tools:
GPU PerfStudio integrates five tools that are key for the contemporary graphics developer:
  • Frame Debugger: The Frame Debugger gives you access to the drawcalls within your application and allows you to view their states and resources. It is possible to pause your application on any given frame and analyze the elements that make up the current frame buffer image. The user may scrub through the draw calls to locate the draw call of interest. The Frame Debugger specializes in viewing the active state for any draw call in a frame and has specialized data viewers for image and description based data. Each data viewer is a dockable window that can be placed and resized by the user to make custom layouts that can be saved and reloaded.
  • Frame Profiler: The Frame Profiler provides a simple overview of the current frame profile, along with more in-depth analysis tools. The initial overview allows you to determine if your application is bound by the CPU or GPU. The in-depth analysis provides access to individual counters and allows you to save custom selections for specialized workflows.
  • Shader Debugger: The Shader Debugger allows you to debug HLSL and ASM Pixel, Compute, and Vertex Shaders inside your application. It allows you to step through your shaders one line at a time and view the registers, variables, and constant values at each pixel. It is even possible to insert breakpoints in the code so that you can quickly jump to a particular line and start debugging. To aid in understanding the flow control of your shader, a Draw Mask image visualizes which pixels were written by the previous instruction.
  • Shader Editing: PerfStudio 2.5 introduces shader editing as a new feature to help the developer author and debug shaders from inside a running application. The user is able to edit DirectX 11 HLSL code in the Shader Code Window, re-compile it using the original or modified compiler flags, and insert the new shader into the application being debugged. This can significantly speed up the edit/save/app-restart cycle, as multiple edits can be applied in one debug session without having to restart the app or the debug tools. Re-insertion of the modified shader into the running application allows the user to immediately see the results of their edits and quickly assess their impact. Coupled with the profiler, it is possible to measure the performance impact of an edit by profiling before and after the edit and comparing the results.
  • API Trace: The API Trace allows you to see all the API calls made by your application in a single frame. If your application uses Direct3D markers, the API Trace will use them to create a navigation tree to help you explore the trace; a minimal sketch of such markers follows this list.
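As a rough illustration of the markers mentioned in the API Trace item above, a Direct3D 9 application can group its draw calls into named events with the D3DPERF functions from d3d9.h. D3DPERF_BeginEvent and D3DPERF_EndEvent are real D3D9 entry points; the pass name and the surrounding function are hypothetical:

    #include <windows.h>
    #include <d3d9.h>  // D3DPERF_* marker entry points (link against d3d9.lib)

    void DrawShadowPass(/* device and scene state omitted */)
    {
        // Everything issued between Begin/End appears as one named,
        // nestable node in a marker-aware tool's API trace.
        D3DPERF_BeginEvent(D3DCOLOR_XRGB(255, 0, 0), L"Shadow pass");
        // ... draw calls for the shadow map would go here ...
        D3DPERF_EndEvent();
    }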
This screenshot shows the Frame Profiler and Frame Debugger in use at the same time. In this scenario the profiler was used to identify an expensive draw call. The draw call was selected in the blue list on the right-hand side, causing the Frame Debugger to jump to that draw call. The vertex and index buffers, the texture assets, and the depth buffer for this draw call are displayed. The pixel shader code can be stepped through, and the relationship between the code and assets can be thoroughly explored to identify costly aspects of the shader.
GPU PerfStudio 2.9:
GPU PerfStudio 2.9 is a fully featured Performance Tool with Integrated Frame Debugger, Frame Profiler and Shader Debugger.

  • This release focuses on improving stability of the product and fixes critical issues in Frame Capture and the Frame Debugger
  • Several issues with Vertex Shader debugging have also been resolved
  • The Shader Debugger constants table can now be saved to disk

Thursday, 24 October 2013

Time Management Quadrant

The Time Management Matrix - Proposed by Dr. Stephen R. Covey

  • Important activities have an outcome that leads to the achievement of your goals, whether these are professional or personal.
  • Urgent activities demand immediate attention, and are often associated with the achievement of someone else's goals.

    Urgent and Important


    There are two distinct types of urgent and important activities: Ones that you could not foresee, and others that you've left to the last minute.
    You can avoid last-minute activities by planning ahead and avoiding procrastination.
    Issues and crises, on the other hand, cannot always be foreseen or avoided. Here, the best approach is to leave some time in your schedule to handle unexpected issues and unplanned important activities. (If a major crisis arises, then you'll need to reschedule other events.)

    Urgent and Not Important


    Urgent but not important activities are things that stop you from achieving your goals and prevent you from completing your work. Ask yourself whether these tasks can be rescheduled, or whether you can delegate them.
    A common source of such interruptions is other people in your office. Sometimes it's appropriate to say "No" to people politely, or to encourage them to solve the problem themselves. Alternatively, try scheduling time when you are available, so that people know they can interrupt you at those times.

    Not Urgent, but Important


    These are the activities that help you achieve your personal and professional goals, and complete important work. Make sure that you have plenty of time to do these things properly, so that they do not become urgent. And remember to leave enough time in your schedule to deal with unforeseen problems. This will maximize your chances of keeping on schedule, and help you avoid the stress of work becoming more urgent than necessary.

    Not Urgent and Not Important


    These activities are just a distraction, and should be avoided if possible. Some can simply be ignored or cancelled. Others are activities that other people may want you to do, but they do not contribute to your own desired outcomes. Again, say "No" politely, if you can.
    If people see you are clear about your objectives and boundaries, they will often not ask you to do "not important" activities in the future.
    Source: Covey's 7 Habits of highly effective people & Mindtools

    Hyperloop - The Future Transport

    Hyperloop

    A Hyperloop is a theoretical mode of high-speed transportation sketched out by serial entrepreneur Elon Musk. Musk envisions the system as a 'fifth mode' of transportation: an alternative to boats, aircraft, automobiles and trains. Musk, who has expressed his intent to develop a prototype Hyperloop, stated that it "could revolutionize travel", but the technological and economic feasibility of the idea has not been independently studied.
    A hyperloop would be, according to Musk, "an elevated, reduced-pressure tube that contains pressurized capsules driven within the tube by a number of linear electric motors".
    Musk and a small group of Tesla and SpaceX engineers released an alpha-level design in August 2013. The alpha design calls for a capsule that would ride on a cushion of air forced through multiple openings on the capsule's bottom. The design proposes "a combination of active and passive means to reduce the negative effects of choked airflow".
    According to the initial alpha design, released on August 12, 2013, a hyperloop would enable travel from the Los Angeles region to the San Francisco Bay Area in 35 minutes, meaning that passengers would traverse the proposed 354-mile (570 km) route at an average speed of just under 598 mph (962 km/h), with a top speed of 760 mph (1,220 km/h).
    If it were built in India, it would take approximately three hours to travel from the extreme north to the extreme south (Kanyakumari). Of course, non-stop :P Hope this comes true.
    [Source: Wiki]

    Tuesday, 22 October 2013

    UEFI

    The Unified Extensible Firmware Interface (UEFI) is a specification that defines a software interface between an operating system and platform firmware. UEFI is a more secure replacement for the older BIOS firmware interface present in all IBM PC-compatible personal computers, an interface that is vulnerable to bootkit malware.

    The original EFI (Extensible Firmware Interface) specification was developed by Intel. In 2005, development of the EFI specification ceased in favour of UEFI, which had evolved from EFI 1.10. The UEFI specification is being developed by the industry-wide organization Unified EFI Forum. UEFI is not restricted to any specific processor architecture and can run on top of, or instead of, older BIOS implementations. UEFI is a community effort by many companies in the personal-computer industry to modernize the booting process. UEFI capable systems are already shipping, and many more are in preparation. During the transition to UEFI, most platform firmware will continue to support legacy (BIOS) booting as well, to accommodate legacy-only operating systems.

    The UEFI specification defines a new model for the interface between personal-computer operating systems and platform firmware. The interface consists of data tables that contain platform-related information, plus boot and runtime service calls that are available to the operating system and its loader. Together, these provide a standard environment for booting an operating system and running pre-boot applications.
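    To make the boot-services idea concrete, here is a minimal pre-boot "hello world" sketch written against the UEFI specification's C interface. The entry-point shape and the ConOut->OutputString call follow the specification; the header name and build details are assumptions that vary by toolchain (GNU-EFI, EDK2, etc.).

        #include <efi.h>  // UEFI spec types (assumes a GNU-EFI style toolchain)

        // Standard UEFI application entry point: the firmware passes in our
        // image handle and the system table, which exposes the boot and
        // runtime services described above.
        EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle,
                                   EFI_SYSTEM_TABLE *SystemTable)
        {
            // ConOut is the simple-text-output protocol from the system table.
            SystemTable->ConOut->OutputString(SystemTable->ConOut,
                L"Hello from the pre-boot environment\r\n");
            return EFI_SUCCESS;
        }

    The firmware loads such an application from the EFI System Partition and calls its entry point, which is exactly the mechanism an OS loader uses to boot.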


    Extensible Firmware Interface's position in the software stack

    It all began when Intel decided to develop a 64-bit CPU. They made a decision that was sound from an engineering standpoint but, unfortunately, less successful in the market: to get rid of all the ancient x86 features, drop x86 backward compatibility entirely, and create a completely new CPU architecture, named Itanium (IA-64). That also meant that old BIOSes would not run on it, and so an opportunity opened for a new standard interface between the OS and the hardware/firmware. This is how the first steps were taken in the mid-1990s to replace the BIOS with a new standard, called the Extensible Firmware Interface (EFI).


    Later, AMD created its own 64-bit architecture, called AMD64, which unlike Itanium was backward compatible with x86. Intel called it EM64T or IA-32e, later Intel 64; Microsoft calls it x64; usually it is called x86-64. Support for this architecture was included in the UEFI 2.0 standard. In April 2008, ARM joined the Unified EFI Forum, so support for those CPUs is expected as well. The latest version of the standard is UEFI 2.1, which has a few minor changes and features compared to UEFI 2.0. Overall, all versions of the standard are very backwards compatible, so software and drivers written for the very first version of EFI still run on the latest boards.


    UEFI Specifications Update

    The interface defined by the EFI specification includes data tables that contain platform information, and boot and runtime services that are available to the OS loader and OS. UEFI firmware provides several technical advantages:

    ·         Ability to boot from large disks (over 2 TiB)
    ·         Faster boot-up
    ·         CPU-independent architecture
    ·         CPU-independent drivers
    ·         Flexible pre-OS environment, including network capability
    ·         Modular design

    Some existing enhancements to PC BIOS, such as the Advanced Configuration and Power Interface (ACPI) and System Management BIOS (SMBIOS), are also present in EFI, as they do not rely on a 16-bit runtime interface. The Unified EFI Forum is a non-profit collaborative trade organization formed to promote and manage the UEFI standard. As an evolving standard, the UEFI specification is driven by contributions and support from member companies of the UEFI Forum.

    The UEFI Forum board of directors includes representatives from the following eleven leading companies:
    ·         AMD
    ·         American Megatrends Inc.
    ·         Apple Computer, Inc.
    ·         Dell
    ·         Hewlett Packard
    ·         IBM
    ·         Insyde
    ·         Intel
    ·         Lenovo
    ·         Microsoft
    ·         Phoenix Technologies


    Monday, 21 October 2013

    Disclaimer

    The information and resources on this site are for educational purposes only and are not a replacement for any technical/medical/psychological treatment. I aim to present the most accurate information possible. Some contents of these postings might be copy-protected and owned by the respective websites' owners. Viewers are strongly advised not to reproduce such content without legal permission/acceptance.

    Thanks for your cooperation and understanding. Enjoy reading.
    HTPC

    Present-day consumers use their PCs for multimedia-intensive tasks such as HD video playback. These HTPC tasks are not very power-efficient when done using the x86 processor alone. Gamers have remained the main focus of GPU developers; however, the GPU architecture (coupled with a dedicated video decoder on the same silicon) is quite useful for video playback and post-processing as well. Home Theater PCs (HTPCs) are becoming more and more popular for a number of reasons. The desire of consumers to watch and enjoy their media, be it Blu-rays/DVDs or broadcast content, in an independent manner (i.e. not limited by DRM restrictions, such as with TiVo recordings or even just optical media), rather than getting tied down with non-upgradeable consumer-electronics equipment, has made the HTPC industry highly relevant. All three major vendors (Intel, AMD, and NVIDIA) pay quite a bit of attention to the HTPC market in their products, but it is universally agreed that AMD offers some of the most economical HTPC building blocks targeted towards budget system builders, so that's our focus for today.

    Home Theater PC (HTPC) or Media Center appliance is a convergence device that combines some or all the capabilities of a personal computer with a software application that supports video, photo, music playback, and sometimes video recording functionality. Although computers with some of these capabilities were available from the late 1980s, the "Home Theater PC" term first appeared in mainstream press in 1996. In recent years, other types of consumer electronics, including gaming systems and dedicated media devices have crossed over to manage video and music content. The term "media center" also refers to specialized application software designed to run on standard personal computers.

    Intel used to integrate the GPU into the chipset, up until the GMA X4500. After AMD acquired ATI, a processor with AMD's x86 CPU and ATI's GPU on the same die was hotly anticipated. The Lynx platform integrates a number of AMD Stars cores and an updated Redwood-class GPU (called Sumo) into the same die.

    HTPC options exist for each of the major operating systems: Microsoft Windows, Mac OS X and Linux. The software is sometimes called "Media Center Software".

    GNU/Linux
    A number of media-center solutions exist for Linux. MythTV is a fully fledged, integrated suite of software which incorporates TV recording, a video library, a video-game library, an image/picture gallery, an information portal and music-collection playback, among other capabilities.

    Windows
    For Microsoft Windows, a common approach is to install a version that contains the Windows Media Center (Home Premium, Professional or Ultimate for Windows 7, Home Premium or Ultimate for Windows Vista, or the older Windows XP Media Center Edition). Alternative HTPC software may be built with the addition of a third party software PVR to a Windows PC. SageTV and GB-PVR have integrated placeshifting comparable to the Slingbox, allowing client PCs and the Hauppauge MediaMVP to be connected to the server over the network.

    Mac OS X
    Beyond the operating system itself, add-on hardware-plus-software combinations (for adding more full-featured HTPC abilities to any Mac) include Elgato's EyeTV series of PVRs, AMD's "ATI Wonder" external USB 2.0 TV tuners, and various individual devices from third-party manufacturers.



    It has now been almost a year since the Llano lineup was launched; by integrating a CPU and GPU into the same die and bringing along AMD's expertise in the GPU arena for HTPCs, these APUs (Accelerated Processing Units) offer a lot to the budget HTPC builders.

    The purpose of a HTPC system is to enable one or more of the following activities:

    ·         Media playback: The media could be either stored locally (on a hard drive, NAS, Blu-ray, or DVD) or be streamed from the Internet (from sites such as Netflix or Hulu). Media files include pictures and music files in addition to videos.
    ·         Optical disc backup creation: This involves the archiving of Blu-ray and DVD movies onto a physical disk (such as a hard drive or a NAS) after removing the DRM protection. This enables consumers to enjoy the content on their purchased discs without the annoying trailers and advertisements, or the need for a Blu-ray drive (e.g. on tablets or smaller HTPCs).
    ·         Recording and/or editing video files: This involves using a TV tuner to capture broadcast content and record it onto a physical drive. The recorded content could then be edited to remove commercials or for any other purpose before being stored away. Sometimes, it might be necessary to transcode the video files as well (say, converting from one H.264 profile to another). This is much more computationally intensive compared to splitting/joining media streams with similar characteristics.



    Some users might also want to use their HTPC for activities such as:

    ·         Gaming: This is, by far, the most common extension of a HTPC outside its original application area. Thanks to the powerful integrated GPU, we have seen that the Llano APUs are quite good with almost all games at mainstream quality settings. If a budget gaming+HTPC build is on your radar, you can't go wrong with the Llanos, provided you understand that high-quality settings and 1080p gaming are likely too much for the iGPU.
    ·         Network DVR/IP Camera recording: This is quite uncommon, but some users might like to have IP camera feeds viewable/recordable through their HTPCs.
    ·         General PC Tasks: These include basic web browsing, downloading and other similar tasks (which almost all HTPCs are bound to be good with)


    AMD's Llano lineup includes a range of processors with TDP ratings from 65W to 100W. Note that simple playback tasks are going to be quite power-efficient, thanks to integrated hardware decoding, so the relatively high TDPs shouldn't put one off. There are also plenty of FM1-socket motherboards based on the A55/A75 FCHs (Fusion Controller Hubs). The choice of the Llano APU, motherboard form factor, and other components should be made depending on the desired usage scenario.

    Friday, 28 June 2013

    OMAP - TI Technology



    The wireless market is huge and growing. As demand grows, so do consumer expectations. The wireless revolution is moving rapidly beyond voice to include such sophisticated applications as mobile e-commerce, real-time Internet, speech recognition, and audio and full-motion video streaming. As a result, wireless Internet appliances require increasingly complex mobile communications and signal-processing capabilities. And while consumers expect state-of-the-art functionality, they continue to demand longer battery life and smaller, sleeker products. To provide these seemingly paradoxical characteristics (processing power for sophisticated applications with no reduction in battery life), wireless Internet appliance OEMs require the highly efficient, power-stingy processing delivered by the Open Multimedia Applications Platform™ (OMAP™) architecture from Texas Instruments. OMAP hardware and software can decode data streams, such as MP3 audio and MPEG-4 video, in real time with just a fraction of the power required when using a best-in-class RISC processor.


    Texas Instruments OMAP (Open Multimedia Application Platform) is a category of proprietary system on chips (SoCs) for portable and mobile multimedia applications developed by Texas Instruments. OMAP devices generally include a general-purpose ARM architecture processor core plus one or more specialized co-processors. Earlier OMAP variants commonly featured a variant of the Texas Instruments TMS320 series digital signal processor.


    In addition, the OMAP application environment is fully programmable. This programmability allows wireless device OEMs, independent developers, and carriers to provide downloadable software upgrades as standards change or bugs are found. Since there is no need to develop new ASIC hardware to implement changes, OMAP OEMs can respond to changing market conditions much more quickly than many of their competitors can.

    The OMAP architecture is based on a combination of TI's state-of-the-art TMS320C55x™ DSP core and the high-performance ARM925T CPU. A RISC architecture like the ARM925T is well suited for control-type code (operating system (OS), user interface, OS applications). A DSP is best suited for signal-processing applications, such as MPEG-4 video, speech recognition, and audio playback. The OMAP architecture combines the two processors to gain the maximum benefit from each. Both processors utilize an instruction cache to reduce the average access time to instruction memory and eliminate power-hungry external accesses. In addition, both cores have a memory management unit (MMU) for virtual-to-physical memory translation and task-to-task memory protection.
    The OMAP family consists of three product groups classified by performance and intended application:
    ·         High-performance applications processors
    ·         Basic multimedia applications processors
    ·         Integrated modem and applications processors

    Additionally, there are two primary distribution channels, and not all parts are available in both. The genesis of the OMAP product line was a partnership with cell-phone vendors, and the main distribution channel involves sales directly to such wireless handset vendors. Parts developed to suit evolving cell-phone requirements are flexible and powerful enough to support sales through less specialized catalog channels; some OMAP 1 parts, and many OMAP 3 parts, have catalog versions with different sales and support models. Parts that are obsolete from the perspective of handset vendors may still be needed to support products developed using catalog parts and distributor-based inventory management.

    Applications including MPEG-4, text-to-speech, unified messaging, Internet audio, videoconferencing, video-clip playback and others require more powerful processors that drain less battery power. They also create dramatic new opportunities for independent software developers who can provide leading-edge applications and features. The OMAP architecture's parallel combination of DSP and RISC processing provides the flexibility to accommodate applications like these while preserving battery life. The open architecture makes it easy for third-party developers to create these and other wireless multimedia applications not yet even imagined. Technology available from TI today provides the gateway to huge new markets tomorrow.
    OMAP Products:
    Many mobile phones use OMAP SoCs, including the Nokia N90, N91, N92, N95, N82, E61, E62, E63, E90, N800, N810 and N900 Internet tablets, Motorola Droid, Droid X, and Droid 2. The Palm Pre, Pandora, Touch Book also use an OMAP SoC (the OMAP3430). Others to use an OMAP SoC include the Sony Ericsson Satio, the Sony Ericsson Vivaz, the Samsung Omnia HD, Sony Ericsson Idou, the Nook Color and some Archos tablets (such as Archos 80 gen 9 and Archos 101 gen 9).
    The OMAP multiprocessor architecture has been optimized to support heavy multimedia applications, such as video and speech in 3G terminals. Such a complex architecture, combining two heterogeneous processors (RISC and DSP), several OS combinations, and applications running on both the DSP and ARM can be made accessible seamlessly to application developers because of the DSP/BIOS Bridge feature. Moreover, this dual processor architecture is more cost efficient and power efficient than a single processor solution.

    Thursday, 27 June 2013

    Augmented Reality



    Augmented reality (AR) is a field of computer research which deals with the combination of real-world and computer-generated data. It is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. As a result, the technology functions by enhancing one's current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. AR technology includes head-mounted displays and virtual retinal displays for visualization purposes, and the construction of controlled environments containing sensors and actuators.


    Video games have been entertaining us for decades, ever since Pong was introduced to arcades in the early 1970s. Computer graphics have become much more sophisticated since then, and game graphics are pushing the barriers of photorealism. Now, researchers and engineers are pulling graphics out of your television screen or computer display and integrating them into real-world environments. This new technology, called augmented reality, blurs the line between what's real and what's computer-generated by enhancing what we see, hear, feel and smell.



    On the spectrum between virtual reality, which creates immersive, computer-generated environments, and the real world, augmented reality is closer to the real world. Augmented reality adds graphics, sounds, haptic feedback and smell to the natural world as it exists. Both video games and cell phones are driving the development of augmented reality. Everyone from tourists, to soldiers, to someone looking for the closest subway stop can now benefit from the ability to place computer-generated graphics in their field of vision.

    Augmented reality is changing the way we view the world -- or at least the way its users see the world. Picture yourself walking or driving down the street. With augmented-reality displays, which will eventually look much like a normal pair of glasses, informative graphics will appear in your field of view, and audio will coincide with whatever you see. These enhancements will be refreshed continually to reflect the movements of your head. Similar devices and applications already exist, particularly on smart phones like the iPhone.


    Applications:

    Advertising: The use of AR to promote products via interactive AR applications is becoming common now. For example, Nissan (2008 LA Auto Show), Best Buy (2009) and others used webcam-based AR to connect 3D models with printed materials. There are numerous examples of connecting mobile AR to outdoor advertising.

    Navigation: AR can augment the effectiveness of navigation devices. For example, building navigation can be enhanced to aid in maintaining industrial plants.

    Military and emergency services: Wearable AR can provide information such as instructions, maps, enemy locations, and fire cells.

    Art: AR can help create art in real time that integrates reality, such as painting, drawing and modeling.

    Entertainment and Education: AR can create virtual objects in museums and exhibitions, theme park attractions, games and books.

    Collaboration: AR can help facilitate collaboration among distributed team members via conferences with real and virtual participants.

    Translation: AR systems can provide dynamic subtitles in the user's language.



    Possible Enactments:

    Devices: Create new applications that are physically impossible in "real" hardware, such as 3D objects interactively changing their shape and appearance based on the current task or need.

    Multi-screen simulation: Display multiple application windows as virtual monitors in real space and switch among them with gestures and/or redirecting head and eyes. A single pair of glasses could "surround" a user with application windows.



    **The content in this blog is purely for education/knowledge purposes. The information provided here is taken from the web, books, other sources of information, or personal experience, as part of the process of keeping the content equipped with the latest info. Patented or protected information is nowhere mentioned at any part of this site. If you feel such data is still here, please contact us and it will be removed immediately.