Augmented Object Intelligence with XR-Objects
Mustafa Doga Dogan, Eric J. Gonzalez, Karan Ahuja, Ruofei Du, Andrea Colaco, Johnny Lee, Mar Gonzalez-Franco, David Kim.
2024 ACM Symposium on User Interface Software and Technology (UIST)
Seamless integration of physical objects as interactive digital entities remains a challenge for spatial computing. This paper explores Augmented Object Intelligence (AOI) in the context of XR, an interaction paradigm that aims to blur the lines between digital and physical by equipping real-world objects with the ability to interact as if they were digital, so that every object can serve as a portal to digital functionalities. Our approach utilizes real-time object segmentation and classification, combined with the power of Multimodal Large Language Models (MLLMs), to facilitate these interactions without the need for object pre-registration. We implement the AOI concept in the form of XR-Objects, an open-source prototype system that provides a platform for users to engage with their physical environment in contextually relevant ways using object-based context menus. This system enables analog objects not only to convey information but also to initiate digital actions, such as querying for details or executing tasks. Our contributions are threefold: (1) we define the AOI concept and detail its advantages over traditional AI assistants, (2) we detail the XR-Objects system’s open-source design and implementation, and (3) we demonstrate its versatility through various use cases and a user study.
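For intuition, here is a minimal Python sketch of the AOI loop described above: detect an object in the camera frame, then route a question about it through an object-anchored context menu to an MLLM. The `DetectedObject` type and the `detect_objects` and `query_mllm` functions are hypothetical stand-ins, not the XR-Objects APIs.

```python
# Minimal AOI sketch: pair a detected object with an MLLM query, with no
# pre-registration of the object. `detect_objects` and `query_mllm` are
# hypothetical stand-ins for a real-time segmentation model and MLLM backend.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str    # class name from the segmentation/classification model
    bbox: tuple   # (x, y, w, h) in screen coordinates

def detect_objects(frame) -> list[DetectedObject]:
    # Placeholder: a real system would run an on-device detector here.
    return [DetectedObject(label="coffee bag", bbox=(120, 80, 200, 260))]

def query_mllm(image, prompt: str) -> str:
    # Placeholder for a multimodal LLM call (image + text in, text out).
    return "Medium roast, 250 g. Suggested brew ratio: 1:16."

def context_menu(frame):
    for obj in detect_objects(frame):
        # Anchor a menu at the object and let the MLLM answer queries about it.
        answer = query_mllm(frame, f"Describe this {obj.label} and suggest actions.")
        print(f"[{obj.label}] -> {answer}")

context_menu(frame=None)  # a real frame would come from the AR camera feed
```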
[project page] [Google Research page] [paper] [video] [GitHub]
Speed-Modulated Ironing: High-Resolution Shade and Texture Gradients in Single-Material 3D Printing
Mehmet Ozdemir, Marwa AlAlawi, Mustafa Doga Dogan, Jose Castro, Stefanie Mueller, Zjenja Doubrovski.
2024 ACM Symposium on User Interface Software and Technology (UIST)
We present Speed-Modulated Ironing, a new fabrication method for programming visual and tactile properties in single-material 3D printing. We use one nozzle to 3D print and a second nozzle to reheat printed areas at varying speeds, controlling the material’s temperature response. The rapid adjustments of speed allow for fine-grained reheating, enabling high-resolution color and texture variations. We implemented our method in a tool that allows users to assign desired properties to 3D models and creates the corresponding 3D printing instructions. We demonstrate our method with three temperature-responsive materials: a foaming filament, a filament with wood fibers, and a filament with cork particles. These filaments respond to temperature by changing color, roughness, transparency, and gloss. Our technical evaluation shows that our method achieves a resolution and color-shade range sufficient for surface details such as small text, photos, and QR codes on 3D-printed objects. Finally, we provide application examples demonstrating the new design capabilities enabled by Speed-Modulated Ironing.
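As a rough illustration of the core mechanism (not the authors’ tool), the sketch below maps a desired shade value to an ironing-pass feed rate and emits the corresponding G-code. The speed range and the linear shade-to-speed mapping are assumptions for illustration; real shade–speed curves are material-specific and calibrated.

```python
# Illustrative sketch: slower reheat passes deposit more heat, so shade is
# programmed by modulating the second nozzle's travel speed. Values assumed.

def shade_to_feedrate(shade: float, f_min=300, f_max=3000) -> float:
    """shade in [0, 1]: 0 = strongest reheat effect (slow), 1 = weakest (fast)."""
    return f_min + shade * (f_max - f_min)

def ironing_gcode(segments):
    """segments: list of (x_start, x_end, y, shade) reheat passes."""
    lines = []
    for x0, x1, y, shade in segments:
        lines.append(f"G0 X{x0:.2f} Y{y:.2f}")                         # travel to start
        lines.append(f"G1 X{x1:.2f} F{shade_to_feedrate(shade):.0f}")  # reheat pass
    return "\n".join(lines)

print(ironing_gcode([(0, 50, 10, 0.2), (0, 50, 10.4, 0.8)]))
```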
[project page] [paper] [video] [GitHub]
MoiréWidgets: High-Precision, Passive Tangible Interfaces via Moiré Effect
Daniel Campos Zamora, Mustafa Doga Dogan, Alexa Siu, Eunyee Koh, Chang Xiao.
2024 ACM CHI Conference on Human Factors in Computing Systems
We introduce MoiréWidgets, a novel approach for tangible interaction that harnesses the Moiré effect—a prevalent optical phenomenon—to enable high-precision event detection on physical widgets. Unlike other electronics-free tangible user interfaces which require close coupling with external hardware, MoiréWidgets can be used at greater distances while maintaining high-resolution sensing of interactions. We define a set of interaction primitives, e.g., buttons, sliders, and dials, which can be used as standalone objects or combined to build complex physical controls. These consist of 3D printed structural mechanisms with patterns printed on two layers—one on paper and the other on a plastic transparency sheet—which create a visual signal that amplifies subtle movements, enabling the detection of user inputs. Our technical evaluation shows that our method outperforms standard fiducial markers and maintains sub-millimeter accuracy at 100 cm distance and wide viewing angles. We demonstrate our approach by creating an audio console and indicate how our approach could extend to other domains.
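The sensing principle can be made concrete with the standard moiré-magnification relation (this back-of-the-envelope calculation is ours, not the paper’s exact encoding): two gratings with slightly different pitches turn tiny slider motions into large, camera-visible fringe shifts.

```python
# Standard moiré relation: gratings with pitches p1 and p2 produce beat
# fringes of period P = p1*p2/|p2 - p1|; sliding one grating by d moves the
# fringes by roughly d * p2/|p2 - p1|. Pitch values below are assumed.

p1, p2 = 1.00, 1.05         # grating pitches in mm (assumed)
P = p1 * p2 / abs(p2 - p1)  # beat (fringe) period
gain = p2 / abs(p2 - p1)    # fringe shift per unit grating shift

d = 0.1  # a 0.1 mm slider movement
print(f"fringe period: {P:.1f} mm, gain: {gain:.0f}x, fringe shift: {d * gain:.1f} mm")
```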
[project page] [doi] [paper] [video] [GitHub]
BrightMarker: 3D Printed Fluorescent Markers for Object Tracking
Mustafa Doga Dogan, Raul Garcia-Martin, Patrick William Haertel, Jamison John O’Keefe, Ahmad Taka, Akarsh Aurora, Raul Sanchez-Reillo, Stefanie Mueller.
2023 ACM Symposium on User Interface Software and Technology (UIST)
Existing invisible object tagging methods suffer from low resolution, which impedes tracking performance. We present BrightMarker, a fabrication method that uses fluorescent filaments to embed easily trackable markers in color 3D-printed objects. By using an infrared-fluorescent filament that “shifts” the wavelength of the incident light, our optical detection setup filters out all the noise so that only the markers remain in the infrared camera image. The high contrast of the markers allows us to track them robustly regardless of the moving objects’ surface color.
We built a software interface that automatically embeds these markers in the input object geometry, as well as hardware modules that can be attached to existing mobile devices and AR/VR headsets. Our image processing pipeline robustly localizes the markers in real time from the captured images. BrightMarker can be used in a variety of applications, such as custom-fabricated wearables for motion capture, tangible interfaces for AR/VR, rapid product tracking, and privacy-preserving night vision. BrightMarker exceeds the detection rate of state-of-the-art invisible marking methods, and even small markers (1″×1″) can be tracked at distances exceeding 2 m.
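Because fluorescence leaves the markers as bright regions on an otherwise dark infrared image, the detection step can be as simple as thresholding and contour extraction. The OpenCV sketch below illustrates this on a synthetic frame; the actual pipeline and marker format may differ.

```python
# Minimal detection sketch, assuming an IR camera image where fluorescent
# markers appear bright on a dark background. Uses opencv-python.
import cv2
import numpy as np

# Synthetic stand-in for an IR frame: dark scene with one bright marker patch.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[100:140, 150:190] = 220

# Fluorescence yields high contrast, so a global threshold suffices here.
_, mask = cv2.threshold(frame, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"marker candidate at ({x}, {y}), size {w}x{h}")
```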
[project page] [doi] [paper] [video] [GitHub]
Featured on MIT News, Hackster.io, Hackaday, and SwissCognitive.
StructCode: Leveraging Fabrication Artifacts to Store Data in Laser-Cut Objects
Mustafa Doga Dogan, Vivian Hsinyueh Chan, Richard Qi, Grace Tang, Thijs Roumen, Stefanie Mueller.
2023 ACM Symposium on Computational Fabrication (SCF)
We introduce StructCode, a technique to store machine-readable data in laser-cut objects using their fabrication artifacts. StructCode modifies the lengths of laser-cut finger joints and/or living hinges to represent bits of information without introducing additional parts or materials. We demonstrate StructCode through use cases for augmenting laser-cut objects with data such as labels, instructions, and narration. We present and evaluate a tag decoding pipeline that is robust to various backgrounds, viewing angles, and wood types. In our mechanical evaluation, we show that StructCodes preserve the structural integrity of laser-cut objects.
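Conceptually, the encoding resembles the toy scheme below, which maps bits to small joint-length offsets around a nominal size. The two-level code and the 0.5 mm delta are illustrative assumptions, not the paper’s actual encoding.

```python
# Toy StructCode-style encoding: represent bits by perturbing finger-joint
# lengths around a nominal size. Length levels below are assumed values.

NOMINAL = 10.0  # nominal finger length in mm (assumed)
DELTA = 0.5     # length offset that encodes a bit (assumed)

def encode(bits: str) -> list[float]:
    """Map each bit to a joint length: '0' -> nominal, '1' -> nominal + delta."""
    return [NOMINAL + DELTA * int(b) for b in bits]

def decode(lengths: list[float]) -> str:
    """Recover bits by comparing measured lengths against the midpoint."""
    return "".join("1" if l > NOMINAL + DELTA / 2 else "0" for l in lengths)

lengths = encode("1011")
print(lengths, "->", decode(lengths))
```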
[paper] [video]
Featured on MIT News and Hackster.io.
StandARone: Infrared-Watermarked Documents as Portable Containers of AR Interaction and Personalization
M. Doga Dogan, Alexa F. Siu, Jennifer Healey, Curtis Wigington, Chang Xiao, Tong Sun.
2023 ACM CHI Conference on Human Factors in Computing Systems, Late-Breaking Work (LBW)
Hybrid paper interfaces leverage augmented reality (AR) to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, the instructions for how the virtual content should be generated are not an intrinsic part of the document but are instead accessed through a link to remote resources. To enable hybrid documents to also serve as portable containers of the AR content itself, we introduce StandARone documents. Using our system, a document author can define AR content and embed it invisibly in the document using a standard inkjet printer and infrared-absorbing ink. A document consumer can interact with the embedded content using a smartphone with a near-infrared (NIR) camera, without requiring a network connection. We demonstrate several use cases of StandARone, including personalized offline menus, interactive visualizations, and location-aware packaging.
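The “portable container” idea amounts to serializing the AR content itself, rather than a URL, into a compact payload that is printed invisibly and decoded offline. The sketch below shows one way to pack such a payload; the JSON schema is hypothetical.

```python
# Sketch: the AR content (not a link) becomes a compact payload that would be
# printed with IR-absorbing ink and read back by a NIR camera, fully offline.
import json, zlib, base64

ar_content = {
    "type": "menu",  # hypothetical schema for illustration
    "items": [{"name": "Espresso", "model": "cup.glb", "anchor": [0.1, 0.2]}],
}

# Compress so the payload fits in a small printed watermark region.
payload = base64.b85encode(zlib.compress(json.dumps(ar_content).encode()))
print(len(payload), "bytes to embed")

# A document consumer decodes entirely offline -- no network round trip.
decoded = json.loads(zlib.decompress(base64.b85decode(payload)))
print(decoded["items"][0]["name"])
```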
[doi] [paper] [video] [talk]
InfraredTags: Invisible AR Markers & Barcodes Using Low-Cost, Infrared-Based 3D Printing & Imaging Tools
M. Doga Dogan, Ahmad Taka, Michael Lu, Yunyi Zhu, Akshat Kumar, Aakar Gupta, Stefanie Mueller.
2022 ACM CHI Conference on Human Factors in Computing Systems
Best Demo Honorable Mention
Existing approaches for embedding unobtrusive tags inside 3D objects require either complex fabrication or high-cost imaging equipment. We present InfraredTags: 2D codes and markers imperceptible to the naked eye that can be 3D printed as part of objects and detected rapidly by low-cost near-infrared cameras. InfraredTags achieve this by being printed from an infrared-transmitting filament, which infrared cameras can see through, and by containing air gaps for the tag’s bits, which infrared cameras capture as darker pixels in the image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. We demonstrate how our method enables applications such as object tracking and embedding metadata for augmented reality and tangible interactions.
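A simplified version of the decoding step: the tag appears as a faint, low-contrast code in the NIR image, so re-binarizing restores it before standard decoding. The sketch below (using opencv-python, pyzbar, and qrcode, with a synthetic low-contrast image in place of a real IR capture) is not the paper’s exact pipeline.

```python
# Simplified InfraredTags-style decoding: restore contrast, then decode.
import cv2
import numpy as np
import qrcode
from pyzbar.pyzbar import decode

# Render a QR code to a grayscale array (stand-in for the embedded tag).
qr = np.array(qrcode.make("https://example.com").get_image().convert("L"))

# Simulate the faint appearance through the IR-transmitting shell:
# compress the dark/light values into a narrow band around mid-gray.
ir_like = (qr.astype(np.float32) * 0.15 + 110).astype(np.uint8)

# Otsu's threshold restores a clean binary code from the low-contrast capture.
_, binary = cv2.threshold(ir_like, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

print([d.data.decode() for d in decode(binary)])
```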
[project page] [doi] [paper] [video] [talk] [GitHub]
Featured on Popular Science, New Scientist, and MIT News.
SensiCut: Material-Aware Laser Cutting Using Speckle Sensing and Deep Learning
M. Doga Dogan, Steven Vidal Acevedo Colon, Varnika Sinha, Kaan Akşit, Stefanie Mueller.
2021 ACM Symposium on User Interface Software and Technology (UIST)
Laser cutter users face difficulties distinguishing between visually similar materials. This can lead to problems, such as using the wrong power/speed settings or accidentally cutting hazardous materials. To support users in identifying material sheets, we present SensiCut, a material sensing platform for laser cutters. In contrast to approaches that detect the appearance of the material with a conventional camera, SensiCut identifies the material by its surface structure using speckle sensing and deep learning. SensiCut comes with a compact hardware add-on for the laser cutter and a user interface that integrates material sensing into the cutting workflow. In addition to improving the traditional workflow, SensiCut enables new applications, such as automatically partitioning the design when engraving on multi-material objects or adjusting the shape of the design based on the kerf of the identified material. We evaluate SensiCut’s accuracy for different types of materials under different conditions, such as various illuminations and sheet orientations.
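Speckle-based material identification can be prototyped as ordinary image classification. The PyTorch sketch below fine-tunes a ResNet-18 on speckle-image crops; the architecture, class count, and training step are assumed stand-ins rather than the authors’ exact recipe.

```python
# Conceptual classifier sketch for speckle images labeled by material type.
import torch
import torch.nn as nn
from torchvision import models

NUM_MATERIALS = 30  # assumed number of material classes

# Random init keeps this demo self-contained; use pretrained weights in practice.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, NUM_MATERIALS)  # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of speckle crops.
speckle_batch = torch.randn(8, 3, 224, 224)  # real inputs: camera crops of speckle
labels = torch.randint(0, NUM_MATERIALS, (8,))
loss = criterion(model(speckle_batch), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```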
[project page] [doi] [paper] [video] [talk]
Featured on The Next Web, Photonics.com, and MIT News.
G-ID: Identifying 3D Prints Using Slicing Parameters
M. Doga Dogan, Faraz Faruqi, Andrew Day Churchill, Kenneth Friedman, Leon Cheng, Sriram Subramanian, Stefanie Mueller.
2020 ACM CHI Conference on Human Factors in Computing Systems
G-ID is a method that utilizes the subtle patterns left by the 3D printing process to distinguish and identify objects that otherwise look similar to the human eye. The key idea is to mark different instances of a 3D model by varying slicing parameters that do not change the model geometry but can be detected as machine-readable differences in the print. As a result, G-ID does not add anything to the object but exploits the patterns appearing as a byproduct of slicing, an essential step of the 3D printing pipeline. We introduce the G-ID slicing & labeling interface that varies the settings for each instance, and the G-ID mobile app, which uses image processing techniques to retrieve the parameters and their associated labels from a photo of the 3D printed object. Finally, we evaluate our method’s accuracy under different lighting conditions, when objects were printed with different filaments and printers, and with pictures taken from various positions and angles.
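At its core, slicing parameters such as the bottom-layer line angle produce a periodic surface texture whose orientation is recoverable from the 2D Fourier spectrum of a photo. The sketch below demonstrates this on a synthetic stripe image; it greatly simplifies the paper’s image-processing pipeline.

```python
# Illustrative angle recovery: find the dominant line orientation of a
# periodic print texture via the 2D FFT. Synthetic image, simplified peak-picking.
import numpy as np

def stripe_image(angle_deg, period=8, size=256):
    """Synthetic stand-in for a photo of a print's bottom-layer line pattern."""
    y, x = np.mgrid[0:size, 0:size]
    t = np.deg2rad(angle_deg)
    return np.sin(2 * np.pi * (x * np.cos(t) + y * np.sin(t)) / period)

img = stripe_image(35)  # instance "labeled" with 35-degree infill lines
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
spectrum[128, 128] = 0  # suppress the DC component at the spectrum center

peak = np.unravel_index(np.argmax(spectrum), spectrum.shape)
angle = np.degrees(np.arctan2(peak[0] - 128, peak[1] - 128)) % 180
print(f"recovered line angle: {angle:.1f} deg")
```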
[project page] [doi] [paper] [video] [talk]
Featured on 3DPrint.com, Hackster.io, and ITmedia (Japanese).
DefeXtiles: 3D Printing Quasi-Woven Fabric via Under-Extrusion
Jack Forman, Mustafa Doga Dogan, Hamilton Forsythe, Hiroshi Ishii.
2020 ACM Symposium on User Interface Software and Technology (UIST)
Best Demo Honorable Mention
We present DefeXtiles, a rapid and low-cost technique to produce tulle-like fabrics on unmodified fused deposition modeling (FDM) printers. The under-extrusion of filament is a common cause of print failure, resulting in objects with periodic gap defects. In this paper, we demonstrate that these defects can be finely controlled to quickly print thinner, more flexible textiles than previous approaches allow. Our approach allows hierarchical control from micrometer structure to decameter form and is compatible with all common 3D printing materials. In this paper, we introduce the mechanism of DefeXtiles and establish the design space through a set of primitives with detailed workflows. We demonstrate the interactive features and new use cases of our approach through a variety of applications, such as fashion design prototyping, interactive objects, aesthetic patterning, and single-print actuators.
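The underlying mechanism, controlled under-extrusion, is visible directly in G-code: scaling the extrusion amount E well below 100% leaves the periodic gaps that form the quasi-woven texture. The flow value and geometry below are illustrative, not the paper’s calibrated settings.

```python
# Sketch of deliberate under-extrusion via the standard slicer E-value formula.
import math

FILAMENT_D = 1.75  # filament diameter, mm
AREA = math.pi * (FILAMENT_D / 2) ** 2

def extrusion_mm(length, layer_h=0.2, line_w=0.4, flow=1.0):
    """Filament length to feed for one printed line of the given geometry."""
    return length * layer_h * line_w * flow / AREA

x, e = 0.0, 0.0
for _ in range(3):
    x += 30.0
    e += extrusion_mm(30.0, flow=0.3)  # ~30% flow -> controlled gap defects
    print(f"G1 X{x:.1f} E{e:.4f}")
```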
[project page] [doi] [paper] [video] [talk]
Featured on Gizmodo and MIT News.
FoldTronics: Creating 3D Objects with Integrated Electronics Using Foldable Honeycomb Structures
Junichi Yamaoka, Mustafa Doga Dogan, Katarina Bulovic, Kazuya Saito, Yoshihiro Kawahara, Yasuaki Kakehi, Stefanie Mueller.
2019 ACM CHI Conference on Human Factors in Computing Systems
FoldTronics is a 2D-cutting-based fabrication technique to integrate electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet to make it foldable into a honeycomb structure using a cutting plotter; before folding the sheet into a 3D structure, users place the electronic components and circuitry onto the sheet. The fabrication process only takes a few minutes, enabling users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Finally, due to the nature of the honeycomb structure, the objects can be folded flat along one axis and thus can be efficiently transported in this compact form factor. We describe the structure of the foldable sheet, and present a design tool that enables users to quickly prototype the desired objects. We showcase a range of examples made with our design tool, including objects with integrated sensors and display elements.
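As a toy illustration of the 2D cut-and-fold layouts involved, the sketch below emits score lines for a strip that folds into honeycomb cells. The cell geometry and mountain/valley sequence are simplified assumptions; the paper’s design tool generates full perforation patterns.

```python
# Toy fold-pattern generator: score a flat strip at regular intervals so it
# folds into honeycomb cells. Wall size and fold sequence are assumed.

def honeycomb_folds(n_cells, wall=10.0):
    """Return (x_position, fold_type) score lines for a strip of n_cells."""
    folds = []
    # Simplification: each cell unrolls to four wall segments, alternating folds.
    pattern = ["mountain", "valley", "valley", "mountain"]
    for i in range(n_cells * 4):
        folds.append(((i + 1) * wall, pattern[i % 4]))
    return folds

for x, kind in honeycomb_folds(2):
    print(f"score at x={x:5.1f} mm ({kind})")
```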
[project page] [doi] [paper] [video] [talk]
Featured on Hackster.io.
Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Aspiration
Donghoon Son, Mustafa Doga Dogan, Metin Sitti.
2017 IEEE International Conference on Robotics and Automation (ICRA)
Max Planck Institute for Intelligent Systems
Best Medical Robotics Paper Award Nomination
This paper presents a magnetically actuated soft capsule endoscope for fine-needle aspiration biopsy (B-MASCE) in the upper gastrointestinal tract. A thin, hollow needle attached to the capsule can penetrate deep into tissue to obtain a subsurface biopsy sample. The design utilizes a soft elastomer body as a compliant mechanism to guide the needle. An internal permanent magnet provides a means for both actuation and tracking. The capsule is designed to roll toward its target and then deploy the biopsy needle at a precisely selected location. B-MASCE is controlled by multiple custom-designed electromagnets, while its position and orientation are tracked by a magnetic sensor array.
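The actuation principle is textbook magnetics: an external field B exerts a torque τ = m × B on the capsule’s internal permanent magnet (and a force via field gradients), which is what the custom electromagnets exploit. The worked example below uses assumed, order-of-magnitude values.

```python
# Worked example of magnetic torque on the capsule's internal magnet.
# Moment and field magnitudes are assumed, order-of-magnitude values.
import numpy as np

m = np.array([0.05, 0.0, 0.0])  # magnetic moment of the internal magnet, A*m^2
B = np.array([0.0, 0.01, 0.0])  # applied flux density, T (10 mT)

tau = np.cross(m, B)            # torque that rolls/orients the capsule, N*m
print(f"torque: {tau} N*m, |tau| = {np.linalg.norm(tau) * 1e6:.0f} uN*m")
```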
[doi] [pdf] [video]
Featured on Engadget and IEEE Spectrum.