Living Edge Lab Videos
Here are Living Edge Lab videos of demos, TV interviews, and conference presentations. Everything is listed in reverse chronological order, from most recent to oldest.
- SteelEagle Autonomous Drone Swarm Demo (Website), November 1, 2024 @ Mill19, Pittsburgh, Pennsylvania. Demonstration of three lightweight drones simultaneously and autonomously tracking subjects using edge computing.
- SteelEagle Autonomous Drone Demo (Short) (Long), April 4, 2024 @ Mill19, Pittsburgh, Pennsylvania. Demonstration of lightweight drones that can autonomously find and track subjects using edge computing.
- Voices on Arm Tech Talk, October 3, 2023. Describes the Just-in-Time Cloudlet project.
- (September 2021) This video starts with a fully assembled Stirling engine that was made from . The cognitive assistant guides the user in disassembling the fully assembled engine. The presence of very small components, such as screws, and the near-uniform color and texture of the components make this challenging from a computer vision viewpoint. The UI on a smartphone shows three windows: (1) a short video clip of the step to be performed; (2) a static image of what the completed step should look like; and (3) a live video of the current progress. This cognitive assistant was built by Jack Wu, a CS sophomore at Carnegie Mellon University, as an NSF REU summer project mentored by Roger Iyengar.
- OEC members Microsoft and Carnegie Mellon University created a video-on-demand session for NVIDIA GTC 2021, which ran April 12-16. In it they discuss their partnership on Azure Stack Hub (and the GPU preview program) and the development of edge-native applications.
- (February 2020) This video shows an open-source tool that we have built to simplify the creation of deep neural network object detectors. You simply use your smartphone to capture a short video of the target object, and then draw bounding boxes in a few frames to create a training data set. TensorFlow is then used to train the object detector from this training set. Created by Junjue Wang as part of his PhD thesis.
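A minimal sketch of the kind of data-preparation step this workflow describes, assuming a hypothetical video file, frame indices, and hand-drawn box coordinates; the actual OpenTPOD interface may differ.

```python
# Sketch: turn a short smartphone video plus a few hand-drawn boxes into a
# labeled image set, roughly the step the tool automates before training.
# The video path, frame indices, and box coordinates below are made up.
import csv
import cv2

VIDEO = "target_object.mp4"          # hypothetical capture from a phone
LABELED = {                          # frame index -> (xmin, ymin, xmax, ymax)
    10: (120, 80, 340, 300),
    45: (150, 90, 360, 310),
    90: (100, 70, 320, 290),
}

cap = cv2.VideoCapture(VIDEO)
with open("annotations.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image", "xmin", "ymin", "xmax", "ymax", "label"])
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx in LABELED:
            name = f"frame_{idx:05d}.jpg"
            cv2.imwrite(name, frame)                 # save the labeled frame
            writer.writerow([name, *LABELED[idx], "target_object"])
        idx += 1
cap.release()
# The resulting images and CSV can then be fed to a TensorFlow
# object-detection training pipeline, as the video describes.
```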
- (February 2020) A Gabriel application is driven by a state machine in which transitions are triggered by sensed inputs (e.g., object detection). OpenWorkFlow is an open-source tool that enables the creation and editing of these state machines. It is designed to work with the object detectors created by OpenTPOD (see video above). Created by Junjue Wang as part of his PhD thesis.
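A minimal sketch of the state-machine idea described here, with hypothetical state names and a stand-in for the object detector; OpenWorkFlow's actual representation may differ.

```python
# Sketch of a Gabriel-style task state machine: each state carries the guidance
# to give the user, and a transition fires when a particular object is detected.
# State names and the detect() stub are illustrative only.

STATE_MACHINE = {
    "start":       {"guidance": "Place the base on the table.",
                    "transitions": {"base": "base_placed"}},
    "base_placed": {"guidance": "Attach the arm to the base.",
                    "transitions": {"arm_attached": "done"}},
    "done":        {"guidance": "Task complete!", "transitions": {}},
}

def detect(frame):
    """Stand-in for a DNN object detector (e.g., one trained with OpenTPOD)."""
    return "base"   # pretend we saw the assembled base in this frame

def step(state, frame):
    """Advance the state machine on one sensed input frame."""
    detected = detect(frame)
    next_state = STATE_MACHINE[state]["transitions"].get(detected, state)
    return next_state, STATE_MACHINE[next_state]["guidance"]

state = "start"
state, guidance = step(state, frame=None)
print(state, "->", guidance)   # base_placed -> Attach the arm to the base.
```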
- (December 2018) Waze is an example of a widely-used system that uses crowd-sourced human reporting to share knowledge about road conditions. In this video, we present a demo of LiveMap on the streets of Pittsburgh. LiveMap is a research system built at Carnegie Mellon University that is conceptually similar to Waze, but avoids the driver distraction that is inherent in human reporting from a single-occupant vehicle. LiveMap uses edge computing to perform video analytics close to the point of data capture. Computer vision algorithms running on an in-vehicle computer (called a "vehicular cloudlet") continuously analyze video streams from one or more cameras mounted on that vehicle. Observations from these algorithms are transmitted over a 4G LTE wireless network to a central collection point (called a "zone cloudlet"), where the reports from many vehicles are synthesized and disseminated. Further details of LiveMap can be found in the following paper: Kevin Christensen, Christoph Mertz, Padmanabhan Pillai, Martial Hebert, and Mahadev Satyanarayanan, Proceedings of HotMobile 2019, Santa Cruz, CA, February 2019.
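A minimal sketch of the data flow described above, with a hypothetical report format and a toy zone-cloudlet aggregator; the real LiveMap protocol and schema are more involved.

```python
# Sketch: vehicular cloudlets emit observation reports; the zone cloudlet
# groups them by location so repeated sightings reinforce one another.
# The Report fields and grid cell size are illustrative, not LiveMap's schema.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Report:
    vehicle_id: str
    lat: float
    lon: float
    hazard: str        # e.g., "pothole", output of on-vehicle video analytics
    timestamp: float

def grid_key(lat: float, lon: float, cell: float = 0.001) -> tuple:
    """Snap coordinates to a coarse grid so nearby reports land together."""
    return (round(lat / cell), round(lon / cell))

def synthesize(reports: list) -> dict:
    """Zone-cloudlet side: count how many vehicles confirmed each hazard cell."""
    confirmed = defaultdict(set)
    for r in reports:
        confirmed[(grid_key(r.lat, r.lon), r.hazard)].add(r.vehicle_id)
    return {key: len(vehicles) for key, vehicles in confirmed.items()}

reports = [
    Report("car-1", 40.4443, -79.9436, "pothole", 1.0),
    Report("car-2", 40.4444, -79.9437, "pothole", 2.0),
]
print(synthesize(reports))   # one pothole cell confirmed by two vehicles
```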
- (July 2018) This video shows the view from a drone that is flying outside the Tepper Quad at Carnegie Mellon University and continuously streaming video to a cloudlet on the ground. SIFT-based pattern matching on the cloudlet identifies what part of the building is currently in view. A bounding box of this view is superimposed on a 3D model of the building that has been created from its Revit engineering drawing. As the drone moves, the bounding box continuously tracks its motion. This PoC can be viewed as a form of "inverted augmented reality" in which the physical world (seen by the drone) is brought into the virtual world (represented by the 3D model).
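A minimal sketch of the SIFT matching step described above, using OpenCV; the image file names and the match threshold are placeholders rather than the demo's actual configuration.

```python
# Sketch: match a drone frame against a reference view of the building using
# SIFT keypoints plus Lowe's ratio test, the core of the pattern-matching step.
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

ref = cv2.imread("tepper_facade_reference.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("drone_frame.jpg", cv2.IMREAD_GRAYSCALE)

kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_frm, des_frm = sift.detectAndCompute(frame, None)

# Lowe's ratio test keeps only distinctive correspondences.
matches = bf.knnMatch(des_frm, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

if len(good) > 20:   # enough matches -> this part of the building is in view
    print(f"View matched with {len(good)} SIFT correspondences")
```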
- (Computex 2018, Taiwan, June 5-9, 2018) In collaboration with the company inwinSTACK, we created a Gabriel application for training a new worker in disk tray assembly for a desktop. This demo was shown live at the Computex 2018 show in Taiwan in June 2018. The application was created by Junjue Wang of Carnegie Mellon University, and demoed at Computex by inwinSTACK employees. The small size of some of the components (especially the pin) and the precise nature of the assembly were difficult challenges to overcome in creating this application. The wearable device used in this application is an ODG-7.
- (August 2017) This Gabriel application was created by Mihir Bala, a talented freshman CS student from the University of Michigan, as an NSF Research Experiences for Undergraduates project under the mentorship of Zhuo Chen. In addition to being another example of a Gabriel application, it offers the first evidence that creating such an application does not require a PhD-level person. Our eventual goal is to make the creation of such applications much easier than it is today. We still have a long way to go!
- (January 2017) Unsolicited by us, this video was made by a startup company VIZR Tech (http://vizrtech.com) to illustrate the potential of wearable cognitive assistance in medical training. The video provides background to explain relevant concepts to the company's target audience. The company already uses Google Glass (with the camera blocked) in medical training, to show videos of complex medical procedures to trainees. We created this new Gabriel application to give a tutorial on the RibLoc system for surgical repair of ribs, which is made by AcuteInnovations, Inc. (http://acuteinnovations.com). Today, this training is given to a doctor by an AcuteInnovations technician traveling to the doctor's site. The Gabriel application illustrates how this training could be delivered more efficiently. In addition, the application is available to the doctor to refresh training at any time. The principals of VIZR Tech appear in this video and share their thoughts about why this is a game-changing innovation. From a technical point of view, the computer vision in this application is particularly difficult because the parts are small, differ in subtle ways (e.g., the color of a screw), and are easily confused under different lighting conditions. The object detectors are all implemented using deep neural networks.
- (January 2017) In our talks on Gabriel, we have often mentioned assembly of IKEA kits as an example of how step-by-step guidance and prompt detection of errors could be valuable. This video shows a Gabriel application to assemble a genuine IKEA kit (a table lamp) purchased off the shelf at IKEA. An interesting first is the use of short video segments (rather than still images) in the Google Glass display to guide the user. The use of videos in this way, combined with the active, context-sensitive real-time guidance from the Gabriel application, is very effective.
- (October 2016) Wearable Cognitive Assistance can be viewed as "Augmented Reality Meets Artificial Intelligence". This 90-second excerpt from the October 9, 2016 CBS 60 Minutes special edition on Artificial Intelligence highlights the table-tennis wearable cognitive assistant on Google Glass.
- (January 2017) This demo shows two things. First, it shows how Gabriel can use much more sophisticated computer vision (based on convolutional neural networks) than the simpler computer vision algorithms used in demos such as Lego and Ping-Pong. Second, it shows how different kinds of wearable devices (Google Glass and Microsoft HoloLens) can be used for the same application with the same Gabriel back-end.
- (November 2016) This demo shows how cloudlets can improve the scalability of video analytics and how they can be used to enforce privacy policies based on face recognition. The demo also illustrates use of the OpenFace face recognition system that we have created. RTFace combines OpenFace with face tracking across frames to achieve the necessary frame rate for live video.
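A minimal sketch of the recognize-then-track pattern described above, with stand-in functions rather than OpenFace's or RTFace's actual APIs; the recognition interval is an assumed constant.

```python
# Sketch of the RTFace idea: run the expensive face recognizer only every
# N-th frame and rely on cheap tracking in between to keep up with live video.
# recognize_faces() and update_tracks() are stand-ins, not OpenFace's API.
RECOGNITION_INTERVAL = 10   # assumed; the real system can tune this

def recognize_faces(frame):
    """Stand-in for OpenFace: returns {face_id: bounding_box}."""
    return {"alice": (40, 60, 120, 120)}

def update_tracks(tracks, frame):
    """Stand-in for a lightweight per-frame tracker that shifts known boxes."""
    return tracks

def process_stream(frames):
    tracks = {}
    for i, frame in enumerate(frames):
        if i % RECOGNITION_INTERVAL == 0:
            tracks = recognize_faces(frame)        # slow, accurate path
        else:
            tracks = update_tracks(tracks, frame)  # fast path keeps frame rate
        yield tracks   # downstream code can blur/deny faces per privacy policy

for boxes in process_stream(frames=[None] * 25):
    pass
```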
- (December 6, 2016) The Fall 2016 offering of 15-821/18-843 "Mobile and Pervasive Computing" course included many 3-person student projects based on cloudlets and wearable cognitive assistance. Examples include wearable cognitive assistance for use of an AED device, cloudlet-based privacy mediator for audio data, etc. This web page contains brief descriptions of the projects, and videos of the student projects captured on the final day of class. The PDFs of the posters used by the students to explain their projects are also included.
- (June 2016) This demo shows the difference between using a cloud and a cloudlet for an application where the impact of latency is easily perceivable by users. We have created an Android application called "FaceSwap" that is available in the Google Play Store. A back-end VM image for an Amazon cloud site is also available. The VM image can also be run on a cloudlet.
- (December 2015) Can a legacy application for training be modified to use Gabriel? This demo shows how a Drawing Assistant created by researchers at INRIA in France has been modified to use a wearable device (Google Glass). In its original form, a user would receive instruction to improve his drawing skills on a desktop display, and provide input using a pen-based tablet. This demo shows how the system has been modified to retain the application logic for instruction, but use any writable surface (e.g., paper, whiteboard, etc.) for input. Computer vision on the video stream from Google Glass is used to generate input and display streams that are indistinguishable from the original.
- (December 2015) This conceptually simple demo has proved to be especially popular because it brings out the importance of low latency. A person wearing Google Glass plays ping-pong with a human opponent. The video stream from the Glass device is streamed to a cloudlet and analyzed frame by frame to detect the ball and the opponent, compare their positions with those in the previous frame, and then infer their trajectories. Based on this, the application guides the user to hit to the left or right in order to increase the chances of beating the opponent. To avoid annoying the user, the application tries not to give advice too frequently and only when it is confident of its advice.
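A minimal sketch of the per-frame logic described above: compare positions across frames, infer a direction, and rate-limit the advice. The positions, thresholds, and cooldown are illustrative values, not the demo's actual parameters.

```python
# Sketch: infer ball motion from two consecutive frames, advise the player to
# hit away from the opponent, and suppress advice that comes too often or with
# low confidence. All numeric values below are placeholders.
import time

ADVICE_COOLDOWN_S = 3.0        # don't advise more often than this
last_advice_time = 0.0

def advise(ball_prev, ball_now, opponent_x, frame_width=640):
    """Return 'left', 'right', or None based on inferred trajectories."""
    global last_advice_time
    now = time.time()
    if now - last_advice_time < ADVICE_COOLDOWN_S:
        return None                               # avoid annoying the user
    dx = ball_now[0] - ball_prev[0]               # ball's horizontal motion
    if abs(dx) < 5:                               # low confidence: stay quiet
        return None
    # Hit away from where the opponent is standing.
    advice = "left" if opponent_x > frame_width / 2 else "right"
    last_advice_time = now
    return advice

print(advise(ball_prev=(300, 200), ball_now=(320, 210), opponent_x=500))  # left
```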
- (December 1, 2015) The Fall 2015 offering of 15-821/18-843 "Mobile and Pervasive Computing" course included many 2-person student projects based on cloudlets and wearable cognitive assistance. Examples include wearable cognitive assistance for gym exercises, using cloudlets for Google Street View hyper-lapse viewing, real-time cloudlet-based super-resolution imaging, etc. This web page contains brief descriptions of the projects, and videos of the student projects captured on demo day. The PDFs of the posters used by the students to explain their projects are also included.
- (September 2015) This is the world's very first wearable cognitive assistance application! We chose a deliberately simplified task (assembling 2D Lego) since it was our first attempt. The demo seems easy, but the code to implement it reliably was challenging (especially handling flexible user actions and varying lighting conditions).
- Impact of high offload latency on mobile user experience (June 2012) These YouTube videos show the effect of end-to-end latency on an Android front-end application with a compute-intensive back-end that is offloaded to an Amazon EC2 cloud or a cloudlet.
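A minimal sketch of how the cloud-versus-cloudlet comparison in these latency videos could be quantified: time a round trip of one frame-sized payload to each back end. The host names, port, and payload size are placeholders, not the demo's actual endpoints.

```python
# Sketch: measure end-to-end offload latency to a distant cloud and to a nearby
# cloudlet, assuming each runs a simple echo-style back end on the given port.
import socket
import time

def round_trip_ms(host: str, port: int, payload: bytes) -> float:
    """Send a payload and wait for a reply, returning elapsed milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)
        sock.recv(4096)                      # wait for the back end's reply
    return (time.perf_counter() - start) * 1000

frame = b"\x00" * 50_000                     # ~50 KB stand-in for a video frame
for name, host in [("cloud", "ec2.example.com"), ("cloudlet", "cloudlet.local")]:
    print(name, f"{round_trip_ms(host, 9098, frame):.1f} ms")
```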