The Semantics of Innovation: Bridging the Credibility Gap in Deep Tech
Written by: Andrea Biancini
In my work managing large-scale digital transformation programs and evaluating deep-tech startup portfolios, I have observed a recurring paradox: the most disruptive innovations often face the greatest barriers to adoption—not due to technical shortcomings, but because of linguistic friction.
As we push the boundaries of AI, XR, and cloud-based systems, we are no longer engineering technology alone; we are engineering perception. When the vocabulary used to describe an innovation activates the wrong cognitive schema, the market may reject the solution long before it has the chance to evaluate the code itself.
The Credibility–Disruption Paradox
Innovation is, by definition, disruptive. Yet successful adoption depends on a delicate equilibrium: a technology must appear advanced enough to be valuable, while remaining familiar enough to be trusted.
We encountered this tension directly while designing educational initiatives within 28DIGITAL. The curriculum focused on one of the next frontiers of AI: systems capable of interpreting and responding to human affective states. Our initial terminology emphasized the “human” dimension of the technology. The outcome was unexpected but instructive: skepticism. The program was perceived as “soft,” even “pseudo-scientific,” despite its strong technical foundations.
The turning point came not from changing the curriculum, the technology, or the learning outcomes—but from changing the language. By reframing the program using the term Affective Computing, we moved away from subjective or relational descriptors and adopted a precise technical taxonomy. This shift immediately reduced perceived risk, signaled scientific rigor, and aligned the program with the “hard tech” expectations of the industry.
Similarly, we adjusted how we articulated the value proposition. Rather than positioning the initiative as a radical disruption, we framed it as an evolutionary capability—an advanced but logical progression of existing AI systems. This subtle change lowered the psychological barrier to adoption by making the innovation feel familiar rather than alien.
Finally, we anchored the program within standardized and ethical frameworks, explicitly connecting it to the EU digital skills agenda and emerging regulatory safeguards. Doing so reframed the initiative from an experimental endeavor into a structured, credible response to well-recognized policy and market needs.
Innovation as Cognitive Infrastructure
In deep tech, communication is not a secondary concern—it is infrastructure. It is the interface through which investors, learners, policymakers, and institutions engage with innovation.
Three dimensions are particularly critical:
Linguistic Interoperability
Just as we design APIs for systems, we must design conceptual APIs for ideas. Terminology must integrate seamlessly with the existing mental models of the target audience.
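As a concrete, purely illustrative sketch, consider how the same capability might be exposed under two different vocabularies. The interface and method names below are hypothetical, not drawn from the program described above, but the contrast shows how naming alone changes the cognitive schema a reader activates.

```typescript
// Hypothetical sketch: one capability, two vocabularies.
// All names here are illustrative assumptions, not from any real library.

// Version 1: relational, emotionally loaded naming.
interface EmpathyEngine {
  understandHowTheUserFeels(message: string): Promise<string>;
}

// Version 2: the same contract expressed in the taxonomy of Affective Computing.
interface AffectClassifier {
  // Maps an utterance to a labelled affective state with a confidence score.
  classifyAffect(utterance: string): Promise<{ label: string; confidence: number }>;
}
```

The underlying functionality is identical; only the conceptual interface differs, and with it the mental model, and the perceived rigor, that the reader brings to the system.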
The Psychology of Trust
From a psychological standpoint, cognitive dissonance arises when highly complex technologies are described using vague or emotionally loaded language. Precision is not pedantry; it is a prerequisite for trust—particularly in strategic domains such as AI and digital sovereignty.
From “What” to “How”
Effective communication shifts attention away from potentially misleading labels toward systemic impact: how the technology optimizes services, augments decision-making, or builds human capability at scale.
Strategic Questions for Service Innovators
As we design the next generation of service systems, several questions deserve careful reflection:
- Are we over-humanizing—or under-specifying—our innovations?
- Does our terminology obscure technical value rather than clarify it?
- Can we map emerging technologies onto existing taxonomies to frame disruption as advanced iteration rather than radical uncertainty?
Concluding Reflections
The most significant innovations are often the hardest to recognize because they lack a stable category in the public imagination. To accelerate adoption:
- Audit your nomenclature: Replace subjective labels with academically grounded terminology to establish immediate authority.
- Align language with rigor: Ensure naming conventions reflect the true complexity of the system.
- Design perception deliberately: Treat go-to-market vocabulary as an integral component of the technical architecture.
Has a name ever stood in the way of your innovation? I am curious to hear from this community: have you encountered projects where strong technical value was undermined by terminology? How did you navigate the balance between being human-centric and maintaining scientific credibility?
