I am a Research Scientist at Adobe Research in Basel, Switzerland, working in the AI Experiences Lab. I received my PhD from MIT, where I worked in the Computer Science & Artificial Intelligence Laboratory (CSAIL).

Toward Ubiquitous Intelligence: The Future of Information Access across Digital & Physical Realms

Ubiquitous Artificial Intelligence

My research is motivated by a simple but profound shift: the way we access information is undergoing its biggest transformation since the invention of the web. We are moving from browsing websites and querying search engines, to interacting with LLMs, personal AI agents, and soon, spatial computing platforms. These emerging interfaces require systems that deliver contextual, reliable information anywhere, at any moment. My work explores what this transformation means for the future of ubiquitous intelligence, where knowledge becomes accessible across the digital and the physical.

Google Scholar  ·  LinkedIn  ·  Twitter  ·  Adobe profile  ·  MIT profile  

Email: doga [at] {adobe.com, mit.edu, csail.mit.edu, acm.org, ieee.org}

My research spans two complementary directions related to ubiquitous intelligence:
1) GenAI- and agent-driven tools for next-generation information access.
At Adobe, I design systems that help organizations and creators adapt to an AI-first information ecosystem. This includes the AEM Sites Optimizer and Adobe LLM Optimizer, which support content quality and discoverability in LLM-dominated environments, an emerging field known as generative engine optimization (GEO). I also develop interfaces for agentic AI, such as Adobe’s Project Get Savvy, and explore multimodal, contextual AI+AR interactions, e.g., augmented object intelligence.
2) Metadata-driven intelligence and interaction for everyday, real-world objects.
During my PhD at MIT CSAIL, I developed new ways for physical objects to carry unobtrusive metadata that AI systems can interpret with high reliability. This line of work — ubiquitous metadata — includes systems such as G-ID, InfraredTags, BrightMarker, and Imprinto, which embed machine-readable information directly into materials. This allows AI and AR systems to perceive objects with guarantees that vision-only algorithms cannot achieve.
Across both directions, my goal is to build AI systems that support the future of information access by fluidly connecting digital and physical contexts, as part of a broader shift toward truly ubiquitous intelligence.

I regularly collaborate with interdisciplinary teams and present my findings at major CS and human-computer interaction (HCI) conferences. For more information about my projects or to discuss potential collaborations, please feel free to reach out to me via email.