Mustafa Doğa Doğan, PhD


I am a researcher and part-time lecturer in the Computer Engineering Dept. at Bogazici University. I teach CMPE 58K Engineering Interactive Systems, and work on INVERSE, an EU Horizon project.

I received my PhD from the Dept. of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT). I worked with Prof. Stefanie Mueller in the Computer Science & Artificial Intelligence Laboratory (CSAIL). I am a 2021 Adobe Research Fellow and a 2020 Siebel Scholar.

Google Scholar  ·  LinkedIn  ·  Twitter  ·  MIT profile  

Email: doga [at] {,,,}

In my research, I develop novel tagging mechanisms to embed markers and metadata in real-world objects that allow users to seamlessly interact with them in augmented reality (AR) and ubiquitous computing. I use digital fabrication, computer vision, and machine learning methods to achieve my human-computer interaction (HCI) vision.


MoiréWidgets: High-Precision, Passive Tangible Interfaces via Moiré Effect

Daniel Campos Zamora, Mustafa Doga Dogan, Alexa Siu, Eunyee Koh, Chang Xiao.
2024 ACM CHI Conference on Human Factors in Computing Systems

We introduce MoiréWidgets, a novel approach for tangible interaction that harnesses the Moiré effect—a prevalent optical phenomenon—to enable high-precision event detection on physical widgets. Unlike other electronics-free tangible user interfaces which require close coupling with external hardware, MoiréWidgets can be used at greater distances while maintaining high-resolution sensing of interactions. We define a set of interaction primitives, e.g., buttons, sliders, and dials, which can be used as standalone objects or combined to build complex physical controls. These consist of 3D printed structural mechanisms with patterns printed on two layers—one on paper and the other on a plastic transparency sheet—which create a visual signal that amplifies subtle movements, enabling the detection of user inputs. Our technical evaluation shows that our method outperforms standard fiducial markers and maintains sub-millimeter accuracy at 100 cm distance and wide viewing angles. We demonstrate our approach by creating an audio console and indicate how our approach could extend to other domains.

[project page] [doi] [paper] [video]

BrightMarker: 3D Printed Fluorescent Markers for Object Tracking

Mustafa Doga Dogan, Raul Garcia-Martin, Patrick William Haertel, Jamison John O’Keefe, Ahmad Taka, Akarsh Aurora, Raul Sanchez-Reillo, Stefanie Mueller.
2023 ACM Symposium on User Interface Software and Technology (UIST)

Existing invisible object tagging methods are prone to low resolution, which impedes tracking performance. We present BrightMarker, a fabrication method that uses fluorescent filaments to embed easily trackable markers in 3D printed color objects. By using an infrared-fluorescent filament that “shifts” the wavelength of the incident light, our optical detection setup filters out all the noise to only have the markers present in the infrared camera image. The high contrast of the markers allows us to track them robustly regardless of the moving objects’ surface color.

We built a software interface for automatically embedding these markers for the input object geometry, and hardware modules that can be attached to existing mobile devices and AR/VR headsets. Our image processing pipeline robustly localizes the markers in real-time from the captured images. BrightMarker can be used in a variety of applications, such as custom fabricated wearables for motion capture, tangible interfaces for AR/VR, rapid product tracking, and privacy-preserving night vision. BrightMarker exceeds the detection rate of state-of-the-art invisible marking, and even small markers (1″×1″) can be tracked at distances exceeding 2 m.

[project page] [doi] [paper] [video]
Featured on MIT News, Hackster.io, Hackaday, and SwissCognitive.

StructCode: Leveraging Fabrication Artifacts to Store Data in Laser-Cut Objects

Mustafa Doga Dogan, Vivian Hsinyueh Chan, Richard Qi, Grace Tang, Thijs Roumen, Stefanie Mueller.
2023 ACM Symposium on Computational Fabrication (SCF)

We introduce StructCode, a technique to store machine-readable data in laser-cut objects using their fabrication artifacts. StructCode modifies the lengths of laser-cut finger joints and/or living hinges to represent bits of information without introducing additional parts or materials. We demonstrate StructCode through use cases for augmenting laser-cut objects with data such as labels, instructions, and narration. We present and evaluate a tag decoding pipeline that is robust to various backgrounds, viewing angles, and wood types. In our mechanical evaluation, we show that StructCodes preserve the structural integrity of laser-cut objects.

[paper] [video]
Featured on MIT News.

StandARone: Infrared-Watermarked Documents as Portable Containers of AR Interaction and Personalization

M. Doga Dogan, Alexa F. Siu, Jennifer Healey, Curtis Wigington, Chang Xiao, Tong Sun
2023 ACM CHI Conference on Human Factors in Computing Systems LBW

Hybrid paper interfaces leverage augmented reality (AR) to combine the desired tangibility of paper documents with the affordances of interactive digital media. Typically, the instructions for how the virtual content should be generated are not an intrinsic part of the document but rather accessed through a link to remote resources. To enable hybrid documents to be portable containers of also the AR content, we introduce StandARone documents. Using our system, a document author can define AR content and embed it invisibly on the document using a standard inkjet printer and infrared-absorbing ink. A document consumer can interact with the embedded content using a smartphone with a NIR camera without requiring a network connection. We demonstrate several use cases of StandARone including personalized offline menus, interactive visualizations, and location-aware packaging.

[doi] [paper] [video] [talk]

InfraredTags: Invisible AR Markers & Barcodes Using Low-Cost, Infrared-Based 3D Printing & Imaging Tools

M. Doga Dogan, Ahmad Taka, Michael Lu, Yunyi Zhu, Akshat Kumar, Aakar Gupta, Stefanie Mueller
2022 ACM CHI Conference on Human Factors in Computing Systems
Best Demo Honorable Mention

Existing approaches for embedding unobtrusive tags inside 3D objects require either complex fabrication or high-cost imaging equipment. We present InfraredTags, which are 2D codes and markers imperceptible to the naked eye that can be 3D printed as part of objects, and detected rapidly by low-cost near-infrared cameras. InfraredTags achieve this by being printed from an infrared-transmitting filament which infrared cameras can see through, and by having air gaps inside for the tag’s bits which infrared cameras capture as darker pixels in the image. We built a user interface that facilitates the integration of common tags (QR codes, ArUco markers) with the object geometry to make them 3D printable as InfraredTags. We also developed a low-cost infrared imaging module that augments existing mobile devices and decodes tags using our image processing pipeline. We demonstrate how our method enables applications, such as object tracking and embedding metadata for augmented reality and tangible interactions.

[project page] [doi] [paper] [video] [talk]
Featured on Popular Science, New Scientist, and MIT News.

SensiCut: Material-Aware Laser Cutting Using Speckle Sensing and Deep Learning

M. Doga Dogan, Steven Vidal Acevedo Colon, Varnika Sinha, Kaan Akşit, Stefanie Mueller
2021 ACM User Interface Software and Technology Symposium (UIST)

Laser cutter users face difficulties distinguishing between visually similar materials. This can lead to problems, such as using the wrong power/speed settings or accidentally cutting hazardous materials. To support users in identifying the sheets, we present SensiCut, a material sensing platform for laser cutters. In contrast to approaches that detect the appearance of the material with a conventional camera, SensiCut identifies the material by its surface structure using speckle sensing and deep learning. SensiCut comes with a compact hardware add-on for the laser cutter and a user interface that integrates material sensing into the cutting workflow. In addition to improving the traditional workflow, SensiCut enables new applications, such as automatically partitioning the design when engraving on multi-material objects or adjusting the shape of the design based on the kerf of the identified material. We evaluate SensiCut’s accuracy for different types of materials under different conditions, such as with various illuminations and sheet orientations.

[project page] [doi] [paper] [video] [talk]
Featured on The Next Web and MIT News.

G-ID: Identifying 3D Prints Using Slicing Parameters

M. Doga Dogan, Faraz Faruqi, Andrew Day Churchill, Kenneth Friedman, Leon Cheng, Sriram Subramanian, Stefanie Mueller
2020 ACM CHI Conference on Human Factors in Computing Systems

G-ID is a method that utilizes the subtle patterns left by the 3D printing process to distinguish and identify objects that otherwise look similar to the human eye. The key idea is to mark different instances of a 3D model by varying slicing parameters that do not change the model geometry but can be detected as machine-readable differences in the print. As a result, G-ID does not add anything to the object but exploits the patterns appearing as a byproduct of slicing, an essential step of the 3D printing pipeline. We introduce the G-ID slicing & labeling interface that varies the settings for each instance, and the G-ID mobile app, which uses image processing techniques to retrieve the parameters and their associated labels from a photo of the 3D printed object. Finally, we evaluate our method’s accuracy under different lighting conditions, when objects were printed with different filaments and printers, and with pictures taken from various positions and angles.

[project page] [doi] [paper] [video] [talk]
Featured on 3DPrint.com, Hackster.io, and ITmedia (Japanese).

DefeXtiles: 3D Printing Quasi-Woven Fabric via Under-Extrusion

Jack Forman, Mustafa Doga Dogan, Hamilton Forsythe, Hiroshi Ishii
2020 ACM User Interface Software and Technology Symposium (UIST)
Best Demo Honorable Mention

We present DefeXtiles, a rapid and low-cost technique to produce tulle-like fabrics on unmodified fused deposition modeling (FDM) printers. The under-extrusion of filament is a common cause of print failure, resulting in objects with periodic gap defects. In this paper, we demonstrate that these defects can be finely controlled to quickly print thinner, more flexible textiles than previous approaches allow. Our approach allows hierarchical control from micrometer structure to decameter form and is compatible with all common 3D printing materials. In this paper, we introduce the mechanism of DefeXtiles and establish the design space through a set of primitives with detailed workflows. We demonstrate the interactive features and new use cases of our approach through a variety of applications, such as fashion design prototyping, interactive objects, aesthetic patterning, and single-print actuators.

[project page] [doi] [paper] [video] [talk]
Featured on Gizmodo and MIT News.

FoldTronics: Creating 3D Objects with Integrated Electronics Using Foldable Honeycomb Structures

Junichi Yamaoka, Mustafa Doga Dogan, Katarina Bulovic, Kazuya Saito, Yoshihiro Kawahara, Yasuaki Kakehi, Stefanie Mueller
2019 ACM CHI Conference on Human Factors in Computing Systems

FoldTronics is a 2D-cutting based fabrication technique to integrate electronics into 3D folded objects. The key idea is to cut and perforate a 2D sheet to make it foldable into a honeycomb structure using a cutting plotter; before folding the sheet into a 3D structure, users place the electronic components and circuitry onto the sheet. The fabrication process only takes a few minutes enabling users to rapidly prototype functional interactive devices. The resulting objects are lightweight and rigid, thus allowing for weight-sensitive and force-sensitive applications. Finally, due to the nature of the honeycomb structure, the objects can be folded flat along one axis and thus can be efficiently transported in this compact form factor. We describe the structure of the foldable sheet, and present a design tool that enables users to quickly prototype the desired objects. We showcase a range of examples made with our design tool, including objects with integrated sensors and display elements.

[project page] [doi] [paper] [video] [talk]

Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Aspiration

Donghoon Son, Mustafa Doga Dogan, Metin Sitti
2017 IEEE International Conference on Robotics and Automation (ICRA)
Max Planck Institute for Intelligent Systems
Best Medical Robotics Paper Award Nomination

This paper presents a magnetically actuated soft capsule endoscope for fine-needle aspiration biopsy (B-MASCE) in the upper gastrointestinal tract. A thin and hollow needle is attached to the capsule, which can penetrate deeply into tissues to obtain a subsurface biopsy sample. The design utilizes a soft elastomer body as a compliant mechanism to guide the needle. An internal permanent magnet provides a means for both actuation and tracking. The capsule is designed to roll towards its target and then deploy the biopsy needle in a precise location selected as the target area. B-MASCE is controlled by multiple custom-designed electromagnets while its position and orientation are tracked by a magnetic sensor array.

[doi] [pdf] [video]
Featured on Engadget and IEEE Spectrum.

Research & Teaching Experience

Research Scientist Intern, Adobe
Document Intelligence Lab
Research Division
Advisors: Alexa Siu, Tong Sun (San Jose, CA)


Research Assistant, Massachusetts Institute of Technology (MIT)
Computer Science and Artificial Intelligence Lab (CSAIL)
Human-Computer Interaction (HCI) Engineering Group
Advisor: Stefanie Mueller (Cambridge, MA)


Teaching Assistant, Massachusetts Institute of Technology (MIT)
Department of Electrical Engineering and Computer Science
6.859: Interactive Data Visualization (Spring 2021)
6.S897: Academic Job Search Seminar (Fall 2021)
6.810: Engineering Interactive Technologies (Fall 2020)


Visiting Researcher, Massachusetts Institute of Technology (MIT)
Computer Science and Artificial Intelligence Lab (CSAIL)
Human-Computer Interaction (HCI) Engineering Group
Advisor: Stefanie Mueller (Cambridge, MA)


Research Assistant, Max Planck Institute for Intelligent Systems
Physical Intelligence Department
Medical Millirobots Group
Advisor: Metin Sitti (Stuttgart, Germany)


Undergraduate Researcher, UCLA
Electrical and Computer Engineering
Laboratory for Embedded Machines and Ubiquitous Robots
Advisor: Ankur Mehta (Los Angeles, CA)


Undergraduate Researcher, Bogazici University
Haptics & Robotics Lab & Intelligent Systems Lab
Advisor: Evren Samur & Işıl Bozma (Istanbul, Turkey)

Undergraduate Teaching Assistant, Bogazici University
EE142 Introduction to Digital Systems
Fall 2016 & Fall 2017 (Istanbul, Turkey)

Conference Service

Organizing Committee

  • Posters Co-Chair: ACM UIST (’22, ’23)
  • Student Volunteer Co-Chair: ACM UIST (’19, ’20)
  • Social Events Co-Chair: ACM UIST (’21)
  • Organizer: ACM CHI Personal Fabrication Community Event  (’20)

Program Committee

  • ACM TEI: International Conference on Tangible, Embedded and Embodied Interaction (’24)

Reviewer


  • ACM CHI Conference on Human Factors in Computing Systems (’20, ’21, ’22, ’23)
  • ACM UIST: Symposium on User Interface Software and Technology (’21, ’22)
  • ACM SIGGRAPH ASIA Conference on Graphics and Interactive Techniques (’22)
  • IEEE ICRA: International Conference on Robotics and Automation (’21)
  • ACM TEI: International Conference on Tangible, Embedded and Embodied Interaction (’21)
  • IEEE ISMAR: International Symposium on Mixed and Augmented Reality (’20)
  • ACM EICS: Symposium on Engineering Interactive Computing Systems (’21)
  • ACM IDC: Interaction Design and Children Conference WIP (’21)
  • IEEE Sensors Journal (’20, ’21)

Student Volunteer

Invited Talks

  • MIT Sloan, Machine Intelligence for Manufacturing and Operations (MIMO) Research Forum (Dec ’22)
  • University of Maryland Human-Computer Interaction Lab (HCIL) BBL Speaker Series (Oct ’22)
  • Samsung America Research, Mountain View (Jul ’22)
  • UC San Diego Design Lab (Aug ’22)
  • Stanford University HCI Lunch Seminar (Jun ’22)
  • University College London, Virtual Graphics and Computer Graphics Seminar Series (Feb ’22)
  • MIT Open Learning – MIT Horizon Talks (Mar ’22)
  • University of Chicago – Human-Computer Integration Lab (Sep ’19)

Education



Ph.D. in Electrical Engineering & Computer Science (’18 – ’24)
Massachusetts Institute of Technology (MIT)
Cambridge, Massachusetts


M.Sc. in Electrical Engineering & Computer Science (’18 – ’20)
Massachusetts Institute of Technology (MIT)
Cambridge, Massachusetts


B.Sc. in Electrical & Electronics Engineering (’14 – ’18)
Bogazici University
Past Chairman of the IEEE Student Branch (’15-’16)
Istanbul, Turkey


Exchange Student, Electrical and Computer Engineering (’17)
University of California, Los Angeles (UCLA)
Los Angeles, CA


Email: doga [at], doga [at]
LinkedIn: /in/dogadogan
Twitter: @mdogadogan

Here are a few pictures that highlight some of my favorite moments.

President Reif visits grad dorm Sidney-Pacific - MIT President L. Rafael Reif and Sr. Associate Dean of Graduate Education Blanche Staton help SP officers and volunteers prepare Sunday brunch (Cambridge, MA, 2019)


© 2024 Mustafa Doğa Doğan