The ability to understand and translate languages is a sought-after skill. Modern computers are capable of translation, but they require the user to disengage from the environment in order to operate them. This research will show that it is possible to create a device that allows the user to comprehend foreign languages without being separated from their surroundings. The goal of the research is to produce an apparatus that translates text seamlessly while moving its displayed field of vision in step with the user's movements. To achieve this effect, three distinct operations must be performed quickly. First, input will be taken from the user's perspective in the form of digital video using an Ovrvision 1 stereoscopic camera, whose field of vision is wider than the user's, allowing for predictive translation. Next, an Android-based translation algorithm will be applied to this input to identify words in a language other than the user's own and replace them with their translations. The algorithm should do so in a way that mimics the words' appearance in the raw input, in order to offer the user an accurate reproduction of the environment. The augmented video will then be returned to the user by means of the Oculus Rift virtual reality headset, thereby achieving the desired result of translating everything in the user's field of vision without disruption.
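The three-stage pipeline described above (capture, translate, overlay) can be sketched as follows. This is a minimal illustrative sketch, not the project's actual implementation: the data types, the toy Spanish-to-English dictionary, and all function names are hypothetical stand-ins for the Ovrvision capture feed, the Android translation service, and the Oculus Rift renderer.

```python
# Hypothetical sketch of the capture -> translate -> overlay pipeline.
# All names and the toy dictionary are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TextRegion:
    text: str     # word recognized in the video frame (e.g. by OCR)
    x: int        # position within the frame
    y: int
    font_px: int  # approximate glyph height, used to mimic appearance

# Toy dictionary standing in for the Android translation service.
DICTIONARY = {"salida": "exit", "entrada": "entrance"}

def capture_frame():
    """Stage 1: stand-in for a camera frame with recognized text regions."""
    return [TextRegion("salida", 120, 40, 32), TextRegion("hello", 10, 10, 16)]

def translate_regions(regions, dictionary):
    """Stage 2: replace foreign words, leaving native-language text untouched."""
    return [
        TextRegion(dictionary.get(r.text, r.text), r.x, r.y, r.font_px)
        for r in regions
    ]

def render_overlays(regions):
    """Stage 3: describe the overlays the headset would composite onto video,
    matching each word's original position and size."""
    return [f"draw '{r.text}' at ({r.x},{r.y}) size {r.font_px}px" for r in regions]

frame = capture_frame()
for line in render_overlays(translate_regions(frame, DICTIONARY)):
    print(line)
```

In a real system, stage 1 would also run OCR on the stereoscopic video, and stage 3 would rasterize the translated word over the original at matching position, scale, and color so the reproduction remains faithful to the scene.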