Introducing myself.
I live and breathe code, data, and algorithms. But the problems I want to solve are human, not only technical. My goals are to tell stories, to inspire ideas, and to design the ways that people interact with, relate to, and even think with one another.
To put that in design and tech industry jargon, I am part Software and Data Engineer, part User Experience and Human Factors Researcher, with a special emphasis on data science and its relation to product design and development.
I've worked in every part of the digital product/service/installation design cycle. This includes design research and data analysis; interaction design and prototyping; and production development and testing.
Magic happens when two of today's most revolutionary fields, Human-Centered Design and Artificial Intelligence, are intimately intertwined.
I have broad experience across technologies, languages, frameworks, and math, but a deep background in exploring and applying Artificial Intelligence approaches to interactive experiences of all kinds and in leveraging the psychological and social sciences for design -- from game AI to data analysis and machine learning; from social robots and computer vision to chatbots.
Take a look at some of my projects and let me know what you think! I'm always stoked to hear about interesting ideas or possible collaborations, especially collaborations at the intersections of art, engineering, and science.
My Backstory.
In school, I studied Philosophy, the Cognitive Sciences, and Computer Science. Since then, I've worked in both academic and commercial settings.
After finishing my MSc in Artificial Intelligence at Edinburgh, I took a Visiting Faculty position at the Institute for Simulation and Training. There, as part of the Robotics Collaborative Technology Alliance, I worked with psychologists and roboticists to develop a research program around how humans and robots could interact and communicate in real environments. I made connections between research labs spread across the country, collaborated on experimental design, and wrote things like this, describing and framing the ideas and practices of the cognitive sciences for design.
After that, I started my PhD in Computer Science at Georgia Tech, where I worked in the Adaptive Digital Media Lab and the Socially Intelligent Machines Lab. My time there was funded by the National Science Foundation, writing curriculum for EarSketch, a system that teaches kids to code while creating and remixing music; and by PSA Peugeot Citroën, working with 3D depth sensors to track car parts using CAD models. My typical lab duties included updating our robotics framework to work with the Robot Operating System and sketching out what it would mean for a robot to establish common ground with a person. It was also during that time that I created the first version of SoundSketch, a musical instrument you play by drawing with markers on paper and touching your drawings with your fingers.
At the beginning of my second year at Georgia Tech, I got the opportunity to join a creative technology group at Wieden+Kennedy, so I put my degree on hold to go work with all manner of designers, artists, and makers in Portland. There, I created everything from a robot that needs help to do anything, to newer versions of SoundSketch, to a chatbot that helps you with your taxes, to animated storytelling with data.
And finally, since 2016 I've been on Instrument's Google team, working with a variety of Alphabet organizations (my faves being Brain, Cloud, Jigsaw, and Environment). It's been a lot of full-stack software engineering, but with a particular focus on data science process, especially the Python scientific stack and Google Cloud Platform tools -- which, as it turns out, are really powerful.
If you're interested in seeing some of my work, check out the project page for more details.