StructCode: Leveraging Fabrication Artifacts to Store Data in Laser-Cut Objects

Mustafa Doga Dogan, Vivian Hsinyueh Chan, Richard Qi, Grace Tang, Thijs Roumen, Stefanie Mueller.
2023 ACM Symposium on Computational Fabrication (SCF)

We introduce StructCode, a technique to store machine-readable data in laser-cut objects using their fabrication artifacts. StructCode modifies the lengths of laser-cut finger joints and/or living hinges to represent bits of information without introducing additional parts or materials. We demonstrate StructCode through use cases for augmenting laser-cut objects with data such as labels, instructions, and narration. We present and evaluate a tag decoding pipeline that is robust to various backgrounds, viewing angles, and wood types. In our mechanical evaluation, we show that StructCodes preserve the structural integrity of laser-cut objects.

[paper] [video]
Featured on MIT News.

BrightMarker: 3D Printed Fluorescent Markers for Object Tracking

Mustafa Doga Dogan, Raul Garcia-Martin, Patrick William Haertel, Jamison John O’Keefe, Ahmad Taka, Akarsh Aurora, Raul Sanchez-Reillo, Stefanie Mueller.
2023 ACM Symposium on User Interface Software and Technology (UIST)

Existing invisible object tagging methods suffer from low resolution, which impedes tracking performance. We present BrightMarker, a fabrication method that uses fluorescent filaments to embed easily trackable markers in 3D printed objects of any color. By using an infrared-fluorescent filament that “shifts” the wavelength of the incident light, our optical detection setup filters out ambient light so that only the markers appear in the infrared camera image. The high contrast of the markers allows us to track them robustly regardless of the moving objects’ surface color.

We built a software interface for automatically embedding these markers into the input object geometry, and hardware modules that can be attached to existing mobile devices and AR/VR headsets. Our image processing pipeline robustly localizes the markers in real-time from the captured images. BrightMarker can be used in a variety of applications, such as custom fabricated wearables for motion capture, tangible interfaces for AR/VR, rapid product tracking, and privacy-preserving night vision. BrightMarker exceeds the detection rate of state-of-the-art invisible marking, and even small markers (1″x1″) can be tracked at distances exceeding 2m.

[project page] [doi] [paper] [video]
Featured on MIT News.
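The optical trick described above — passing only the fluorescence emission band so the markers are the only bright content left — can be illustrated with a toy simulation. This is not the actual optics or pipeline; the cutoff wavelength and data are made up for illustration.

```python
# Toy illustration of long-pass filtering (not BrightMarker's actual
# optics): light below the hypothetical cutoff (ambient visible light)
# is suppressed, so only the marker's IR fluorescence remains.
CUTOFF_NM = 800  # assumed long-pass filter cutoff, for illustration

def filtered_brightness(pixels):
    """Keep only light above the cutoff wavelength; everything else
    reads as zero, leaving a high-contrast marker image."""
    return [b if wavelength > CUTOFF_NM else 0 for wavelength, b in pixels]

# (wavelength_nm, brightness) samples: visible scene light vs. marker glow
frame = [(550, 180), (620, 140), (850, 90)]
print(filtered_brightness(frame))  # -> [0, 0, 90]
```

The surviving pixels are exactly the fluorescent marker regions, which is why detection stays robust regardless of the object's visible surface color.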

StandARone: Infrared-Watermarked Documents as Portable Containers of AR Interaction and Personalization

M. Doga Dogan, Alexa F. Siu, Jennifer Healey, Curtis Wigington, Chang Xiao, Tong Sun
2023 ACM CHI Conference on Human Factors in Computing Systems, Late-Breaking Work (LBW)

Hybrid paper interfaces leverage augmented reality (AR) to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, the instructions for how the virtual content should be generated are not an intrinsic part of the document but are instead accessed through a link to remote resources. To enable hybrid documents to carry the AR content itself, we introduce StandARone documents. Using our system, a document author can define AR content and embed it invisibly in the document using a standard inkjet printer and infrared-absorbing ink. A document consumer can interact with the embedded content using a smartphone with a near-infrared (NIR) camera, without requiring a network connection. We demonstrate several use cases of StandARone, including personalized offline menus, interactive visualizations, and location-aware packaging.

[doi] [paper] [video] [talk]

InfraredTags: Invisible AR Markers & Barcodes Using Low-Cost, Infrared-Based 3D Printing & Imaging Tools

M. Doga Dogan, Ahmad Taka, Michael Lu, Yunyi Zhu, Akshat Kumar, Aakar Gupta, Stefanie Mueller
2022 ACM CHI Conference on Human Factors in Computing Systems
Best Demo Honorable Mention

Existing approaches for embedding unobtrusive tags inside 3D objects require either complex fabrication or high-cost imaging equipment. We present InfraredTags, which are 2D codes and markers imperceptible to the naked eye that can be 3D printed as part of objects, and detected rapidly by low-cost near-infrared cameras. InfraredTags achieve this by being printed from an infrared-transmitting filament which infrared cameras can see through, and by having air gaps inside for the tag’s bits which infrared cameras capture as darker pixels in the image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. We demonstrate how our method enables applications, such as object tracking and embedding metadata for augmented reality and tangible interactions.

[project page] [doi] [paper] [video] [talk]
Featured on Popular Science, New Scientist, and MIT News.
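The decoding principle — air gaps behind the IR-translucent shell appearing as darker pixels — can be sketched as a simple binarization step. This is an illustrative stand-in, not the paper's image processing pipeline, and the intensity values are invented.

```python
# Illustrative sketch (not InfraredTags' actual pipeline): air gaps
# inside the print show up darker through the IR-transmitting shell,
# so thresholding against the mean intensity recovers the tag bits.
def binarize(module_intensities):
    """Map the mean IR intensity of each tag module to a bit
    (dark air gap -> 1, brighter solid material -> 0)."""
    mean = sum(module_intensities) / len(module_intensities)
    return [1 if v < mean else 0 for v in module_intensities]

# Darker modules (air gaps) decode as 1s, brighter solid modules as 0s.
print(binarize([40, 200, 35, 210, 190, 30]))  # -> [1, 0, 1, 0, 0, 1]
```

A real decoder would first rectify the captured image and locate the tag before sampling module intensities; this sketch only shows the final thresholding idea.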

SensiCut: Material-Aware Laser Cutting Using Speckle Sensing and Deep Learning

M. Doga Dogan, Steven Vidal Acevedo Colon, Varnika Sinha, Kaan Akşit, Stefanie Mueller
2021 ACM Symposium on User Interface Software and Technology (UIST)

Laser cutter users face difficulties distinguishing between visually similar materials. This can lead to problems, such as using the wrong power/speed settings or accidentally cutting hazardous materials. To support users in identifying the sheets, we present SensiCut, a material sensing platform for laser cutters. In contrast to approaches that detect the appearance of the material with a conventional camera, SensiCut identifies the material by its surface structure using speckle sensing and deep learning. SensiCut comes with a compact hardware add-on for the laser cutter and a user interface that integrates material sensing into the cutting workflow. In addition to improving the traditional workflow, SensiCut enables new applications, such as automatically partitioning the design when engraving on multi-material objects or adjusting the shape of the design based on the kerf of the identified material. We evaluate SensiCut’s accuracy for different types of materials under different conditions, such as with various illuminations and sheet orientations.

[project page] [doi] [paper] [video] [talk]
Featured on The Next Web, Photonics.com, and MIT News.
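The matching step behind material identification can be illustrated with a nearest-neighbor stand-in. The paper uses speckle sensing and a deep network; the feature vectors, material names, and distance-based classifier below are purely illustrative assumptions.

```python
# Minimal stand-in for SensiCut's classification step (the actual
# system uses a deep network on speckle images): match a sampled
# feature vector to the closest reference material. All vectors here
# are hypothetical.
def classify(sample, references):
    """Return the material whose reference features are closest
    (squared Euclidean distance) to the sampled features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist(sample, references[name]))

references = {
    "acrylic": [0.9, 0.1, 0.2],
    "plywood": [0.3, 0.8, 0.5],
    "MDF":     [0.2, 0.4, 0.9],
}
print(classify([0.85, 0.15, 0.25], references))  # -> acrylic
```

The point of speckle-based features, as the abstract notes, is that they capture surface microstructure rather than appearance, so visually similar sheets still separate cleanly in this feature space.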

G-ID: Identifying 3D Prints Using Slicing Parameters

M. Doga Dogan, Faraz Faruqi, Andrew Day Churchill, Kenneth Friedman, Leon Cheng, Sriram Subramanian, Stefanie Mueller
2020 ACM CHI Conference on Human Factors in Computing Systems

G-ID is a method that utilizes the subtle patterns left by the 3D printing process to distinguish and identify objects that otherwise look similar to the human eye. The key idea is to mark different instances of a 3D model by varying slicing parameters that do not change the model geometry but can be detected as machine-readable differences in the print. As a result, G-ID does not add anything to the object but exploits the patterns appearing as a byproduct of slicing, an essential step of the 3D printing pipeline. We introduce the G-ID slicing & labeling interface that varies the settings for each instance, and the G-ID mobile app, which uses image processing techniques to retrieve the parameters and their associated labels from a photo of the 3D printed object. Finally, we evaluate our method’s accuracy under different lighting conditions, when objects were printed with different filaments and printers, and with pictures taken from various positions and angles.

[project page] [doi] [paper] [video] [talk]
Featured on 3DPrint.com, Hackster.io, and ITmedia (Japanese).
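The labeling idea — giving each instance a unique combination of slicing parameters that leaves the geometry untouched — can be sketched as a simple enumeration. The specific parameters and values below are illustrative assumptions, not the set G-ID actually uses.

```python
# Hypothetical sketch of G-ID's labeling scheme: each instance gets a
# unique combination of slicing settings that does not change the model
# geometry but leaves a machine-readable surface texture. The parameter
# values are illustrative.
from itertools import product

BOTTOM_ANGLES = [0, 30, 60, 90]      # bottom-layer line angle (degrees)
LAYER_HEIGHTS = [0.15, 0.20, 0.25]   # layer height (mm)

# Enumerate all distinguishable instances.
LABELS = list(product(BOTTOM_ANGLES, LAYER_HEIGHTS))

def label_for(instance_id):
    """Slicing settings assigned to a given instance number."""
    return LABELS[instance_id]

def instance_for(angle, height):
    """Inverse lookup, as the mobile app would do after measuring the
    printed surface pattern from a photo."""
    return LABELS.index((angle, height))

assert instance_for(*label_for(7)) == 7
assert len(LABELS) == 12  # 4 angles x 3 heights
```

The capacity grows multiplicatively with each added parameter, which is why varying several independent slicing settings yields many distinguishable instances without adding anything to the object.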

DefeXtiles: 3D Printing Quasi-Woven Fabric via Under-Extrusion

Jack Forman, Mustafa Doga Dogan, Hamilton Forsythe, Hiroshi Ishii
2020 ACM Symposium on User Interface Software and Technology (UIST)
Best Demo Honorable Mention

We present DefeXtiles, a rapid and low-cost technique to produce tulle-like fabrics on unmodified fused deposition modeling (FDM) printers. The under-extrusion of filament is a common cause of print failure, resulting in objects with periodic gap defects. In this paper, we demonstrate that these defects can be finely controlled to quickly print thinner, more flexible textiles than previous approaches allow. Our approach allows hierarchical control from micrometer structure to decameter form and is compatible with all common 3D printing materials. In this paper, we introduce the mechanism of DefeXtiles and establish the design space through a set of primitives with detailed workflows. We demonstrate the interactive features and new use cases of our approach through a variety of applications, such as fashion design prototyping, interactive objects, aesthetic patterning, and single-print actuators.

[project page] [doi] [paper] [video] [talk]
Featured on Gizmodo and MIT News.

FoldTronics: Creating 3D Objects with Integrated Electronics Using Foldable Honeycomb Structures

Junichi Yamaoka, Mustafa Doga Dogan, Katarina Bulovic, Kazuya Saito, Yoshihiro Kawahara, Yasuaki Kakehi, Stefanie Mueller
2019 ACM CHI Conference on Human Factors in Computing Systems

FoldTronics is a 2D-cutting based fabrication technique to integrate electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet to make it foldable into a honeycomb structure using a cutting plotter; before folding the sheet into a 3D structure, users place the electronic components and circuitry onto the sheet. The fabrication process takes only a few minutes, enabling users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Finally, due to the nature of the honeycomb structure, the objects can be folded flat along one axis and thus can be efficiently transported in this compact form factor. We describe the structure of the foldable sheet, and present a design tool that enables users to quickly prototype the desired objects. We showcase a range of examples made with our design tool, including objects with integrated sensors and display elements.

[project page] [doi] [paper] [video] [talk]
Featured on Hackster.io.

Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Aspiration

Donghoon Son, Mustafa Doga Dogan, Metin Sitti
2017 IEEE International Conference on Robotics and Automation (ICRA)
Max Planck Institute for Intelligent Systems
Best Medical Robotics Paper Award Nomination

This paper presents a magnetically actuated soft capsule endoscope for fine-needle aspiration biopsy (B-MASCE) in the upper gastrointestinal tract. A thin and hollow needle is attached to the capsule, which can penetrate deeply into tissues to obtain a subsurface biopsy sample. The design utilizes a soft elastomer body as a compliant mechanism to guide the needle. An internal permanent magnet provides a means for both actuation and tracking. The capsule is designed to roll towards its target and then deploy the biopsy needle in a precise location selected as the target area. B-MASCE is controlled by multiple custom-designed electromagnets while its position and orientation are tracked by a magnetic sensor array.

[doi] [pdf] [video]
Featured on Engadget and IEEE Spectrum.