Overlapping Fields

As I continue my master’s studies in Interactive Media and Communication while working in data science, I’ve started to notice a recurring tension between the structure that data work demands and the variability that design work embraces. In data science, I’m trained to look for patterns, stability, and repeatability. In design fields, especially interactive and experience-driven ones, variability is not noise; it is the point.

From a traditional data science perspective, variability is something to be reduced. We smooth signals, control for confounding factors, and try to isolate the underlying relationship between variables. But in design disciplines like UX and media production, variability is often what makes an experience feel alive. A user’s emotional response, the context of use, the device they’re on, and even their cultural background all create variation that designers intentionally embrace rather than eliminate.

This becomes especially clear when working with interactive systems. A button click, a scroll behavior, or a navigation path can be logged as clean, structured data. But those interactions are not purely mechanical events. The same action can mean different things depending on intent. A user might click quickly out of confidence, hesitation, curiosity, or confusion. From a dataset perspective, these all look identical. From a design perspective, they are fundamentally different experiences.
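A minimal sketch makes the point concrete. The field names below (`event`, `target`, `duration_ms`) are illustrative assumptions, not any particular analytics schema:

```python
# Two interaction events as a hypothetical logging pipeline might record them.
# One click was made confidently, the other after hesitation, but nothing in
# the logged fields captures that difference.
confident_click = {"event": "click", "target": "checkout_button", "duration_ms": 120}
hesitant_click = {"event": "click", "target": "checkout_button", "duration_ms": 120}

# From the dataset's point of view, the two records are indistinguishable.
print(confident_click == hesitant_click)  # True
```

The equality check succeeding is exactly the problem: two fundamentally different experiences collapse into identical rows.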

Working across both fields, I’ve found that this gap between measurable behavior and lived experience is where both the opportunity and the challenge lie. Data can help identify friction points in a user journey, but it cannot fully explain why that friction exists. A drop in engagement might signal a usability issue, but it might also reflect emotional fatigue, shifting expectations, or even an aesthetic mismatch between the interface and the user’s mental model. Design decisions often emerge from interpreting these ambiguous signals rather than relying on definitive answers.

What makes interactive media particularly interesting is that it sits directly at the intersection of structure and subjectivity. Systems are designed to respond predictably, yet users rarely behave in predictable ways. This creates a feedback loop where design influences behavior, and behavior in turn reshapes design priorities. From a modeling standpoint, this is a constantly shifting distribution rather than a stable one, which challenges many of the assumptions that traditional machine learning approaches rely on.
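To see why a shifting distribution is troublesome, consider a toy engagement metric logged before and after a design change. This is a sketch with synthetic data, and the 0.5 threshold is an arbitrary assumption; real monitoring would use proper statistical tests:

```python
import random

random.seed(0)

# Synthetic engagement metric in two time windows. The design change
# between windows shifts user behavior, so the distribution moves.
before = [random.gauss(5.0, 1.0) for _ in range(1000)]
after = [random.gauss(6.5, 1.0) for _ in range(1000)]

def mean(xs):
    return sum(xs) / len(xs)

# A crude drift check: flag when the windowed mean moves by more than a
# threshold. Even this sketch shows why a model trained only on "before"
# quietly goes stale once behavior reshapes the data it sees.
drift = abs(mean(after) - mean(before)) > 0.5
print(drift)  # True
```

The feedback loop in the paragraph above is what makes this unavoidable: the model’s own influence on design, and design’s influence on behavior, guarantee that the “before” window never stays representative.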

At the same time, I’ve come to appreciate that variability is not a problem to be solved, but a condition to be understood. In design fields, inconsistency can be meaningful. Two users completing the same task in different ways may both be successful in the end, but their paths reveal different mental models of the system. Rather than forcing convergence, the goal often becomes supporting divergence while maintaining coherence.
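The divergent-paths idea can be sketched with two hypothetical navigation sequences (the page names are made up for illustration):

```python
# Two users completing the same task. Both succeed, but the routes reveal
# different mental models: one searches directly, one browses and compares.
path_a = ["home", "search", "product", "checkout"]
path_b = ["home", "category", "product", "compare", "product", "checkout"]

# If we only log the endpoint, success looks identical...
print(path_a[-1] == path_b[-1])  # True
# ...while the full sequences diverge.
print(path_a == path_b)  # False
```

Supporting divergence while maintaining coherence means designing so that both sequences remain valid, rather than optimizing everyone onto `path_a`.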

This perspective has changed how I think about data science itself. Instead of treating data as a way to eliminate uncertainty, I’ve started to see it as a way to map the boundaries of uncertainty. In interactive media, those boundaries are where design lives. They define where systems are predictable and where human interpretation takes over.

Ultimately, working across data science and interactive media has taught me that not all variability should be reduced. Some of it should be studied, some of it should be designed for, and some of it should simply be preserved. The most interesting systems are not the ones that behave identically every time, but the ones that remain responsive to the complexity of the people using them.
