The Kingsman National Institute has released a significant dual publication that directly confronts the complex ethical and legal fallout of its own technological advancements. In a coordinated effort, researchers from our Life Sciences faculty and the Aegean Informatics Laboratory (AIL) have announced a breakthrough in predictive health modelling, while our PPL (Philosophy, Politics & Law) faculty has simultaneously published a proposed legal framework to govern its use.
This internal, parallel development process is a core tenet of our “Athenian Synthesis” model. We believe it is not enough to simply create powerful technology; an academic institution has a profound responsibility to simultaneously build the ethical and legal “operating systems” necessary to manage it.
The scientific component, a project led by Professor Lars Jensen (Molecular Biology) and Dr. Matic Novak (AIL), details a new machine learning model. This system analyses a complex array of genomic markers and proteomic data to predict the probable onset window for certain neurodegenerative diseases, such as Parkinson’s or specific motor neurone conditions, years before any clinical symptoms become apparent.
The model, trained on extensive, anonymised longitudinal health data, has shown a statistically significant—though, we must stress, not infallible—level of predictive accuracy. It represents a potential paradigm shift in diagnostics, moving from reactive treatment to proactive, preventative care.
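The announcement does not describe the model's internals, so the following is only a minimal, illustrative sketch of what an onset-window predictor of this general kind might look like. The synthetic data, feature counts, and the choice of a gradient-boosted regressor are all assumptions for illustration, not details of the Jensen-Novak system.

```python
# Illustrative sketch only: a regressor predicting years-to-onset from
# molecular features. All data here is synthetic and all modelling choices
# are assumptions; the actual Jensen-Novak architecture is not public.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(seed=0)

# Synthetic stand-ins for genomic markers and proteomic measurements.
n_patients, n_markers = 500, 20
X = rng.normal(size=(n_patients, n_markers))
# Hypothetical target: years until clinical onset (midpoint of the window).
y = 10.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=2.0, size=n_patients)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# Report error in years. A real system would also report calibrated
# uncertainty, since, as stressed above, the prediction is probable,
# not certain.
pred = model.predict(X_test)
print(f"MAE (years to onset): {mean_absolute_error(y_test, pred):.2f}")
```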
However, the very success of this predictive tool creates an immediate and profound societal dilemma: the conflict between the “right to know” and the equally critical “right to genetic ignorance.” What is the legal and moral status of information about a future illness that is only probable, not certain? How does this knowledge impact a person’s life, employment, and financial security?
This is where our synthesis model becomes critical. The corresponding legal paper, authored by Professor Zofia Kaczmarek (Jurisprudence) and Dr. Sofia Costa (Applied Ethics, CDEG), provides a direct and robust answer. Published in a leading European legal journal, their work proposes a new “Genetic Information Firewall” framework designed for adoption by EU policymakers.
This legal model argues that predictive health data derived from AI must be classified as “Speculative Probabilistic Data,” granting it a unique and highly protected legal status.
The Kaczmarek-Costa framework explicitly recommends that this Speculative Data be made legally inadmissible in all insurance underwriting and employment contexts. The authors argue that allowing a corporation to set premiums or deny employment based on a probability of future illness, a probability generated by a non-transparent algorithm, constitutes a new and indefensible form of genetic discrimination.
Furthermore, the framework champions the individual’s “right to un-know.” It proposes a mandatory, dual-key legal mechanism: predictive data can be unlocked and revealed to the patient only following explicit, time-locked consent, and only after a mandatory consultation with an independent medical ethics counsellor. This ensures a person cannot be coerced, by family or employers, into accessing information they do not wish to have.
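The paper specifies a legal mechanism rather than an implementation, but the dual-key release logic can be sketched in code. In the Python below, the names ConsentRecord, CounsellorSignOff, and may_release are hypothetical; the sketch only shows how both “keys”, unexpired patient consent and a completed independent consultation, must be present before any result is revealed.

```python
# Hypothetical sketch of the dual-key release rule described above.
# Class and field names are invented for illustration; the framework
# defines a legal mechanism, not a software interface.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ConsentRecord:
    granted_at: datetime
    valid_for: timedelta  # consent is time-locked and expires

@dataclass
class CounsellorSignOff:
    counsellor_id: str
    consultation_at: datetime

def may_release(consent: ConsentRecord | None,
                sign_off: CounsellorSignOff | None,
                now: datetime) -> bool:
    """Both keys must be present: unexpired patient consent and a
    completed consultation with an independent ethics counsellor."""
    if consent is None or sign_off is None:
        return False
    if now > consent.granted_at + consent.valid_for:
        return False  # consent window has lapsed; re-consent required
    return sign_off.consultation_at <= now

now = datetime.now(timezone.utc)
consent = ConsentRecord(granted_at=now - timedelta(days=1),
                        valid_for=timedelta(days=30))
sign_off = CounsellorSignOff("counsellor-17", now - timedelta(hours=2))
print(may_release(consent, sign_off, now))  # True: both keys present
print(may_release(consent, None, now))      # False: no consultation yet
```

Note that the default in such a scheme is refusal: absent either key, the data stays sealed, which is what makes the “right to un-know” enforceable rather than merely aspirational.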
This dual publication is the product of a deliberate, and sometimes uncomfortable, internal process. By forcing our own science and philosophy faculties into this direct dialogue, the Kingsman National Institute aims to provide a more holistic and responsible model for 21st-century innovation: one where we not only ask “Can we do this?” but simultaneously provide a rigorous answer to “How must we control this?”
