Fluid UI

Whether you’re navigating a website or an advanced space-flight system, existing UI offers a slew of all possible options and requires commanding users to manually work through every tier to determine and execute a single goal; one by one, down the rabbit hole, until you find what you’re looking for.

Can you imagine Jarvis with the limitation of a touch-tone navigation interface?

Fluid UI uses AI to determine intent, which translates to a more dynamic designation of interaction points and reduces user interface friction. Removing the noise and toil from the interface enables indirect, instructional orders to be queued by the AI and executed by the commanding user.

Example: Spoken user command: “Primary Objective: Return ship home safely.”

Result: Fluid UI engages a mapping function to chart a course for ‘home’, cross-referencing accessible Auxiliary Data Points with the Main Core Index to determine safe passage by asserting Waypoint Directives. It returns to the user with a Boolean possibility status and a percentage-based likelihood for the favoured routes, then awaits a command to either execute or redefine the primary (or subsequent) objectives.

Let’s take a closer look at the example command, word by word, and how the AI would arrive at this report:

  1. Primary Objective [intent: override existing primary objective with the following command]
  2. Return [intent: a journey – consult Main Core Index for access to the primary navigation function (requires Subject and Destination parameters)]
  3. ship [intent: define Subject – consult Main Core Index to retrieve contextual definition of the subject and its assumed parameters of scope (crew, decks, cargo, engines etc.)]
  4. home [intent: define Destination – consult Main Core Index to retrieve contextual definition of the destination and its assumed parameters of scope (Earth, Ireland, Dublin Space Port)]
  5. safely [intent: define Waypoint Directive – consult Auxiliary Data Points to retrieve known dangers en route to the destination and input them as parameters to the primary navigation function]
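The word-by-word decomposition above can be sketched as a toy intent parser. Everything here is illustrative: the lookup tables standing in for the Main Core Index and Auxiliary Data Points, and the destination value, are invented for the sketch; a real Fluid UI would presumably use a language model rather than a hand-written dictionary.

```python
# Toy sketch of the word-by-word intent decomposition described above.
# The knowledge bases are hypothetical stand-ins, not a real API.

MAIN_CORE_INDEX = {
    "return": {"function": "navigate", "params": ["subject", "destination"]},
    "ship": {"subject": "ship", "scope": ["crew", "decks", "cargo", "engines"]},
    "home": {"destination": "Dublin Space Port"},  # assumed contextual default
}
AUXILIARY_DATA_POINTS = {
    "safely": {"waypoint_directive": "avoid_known_dangers"},
}

def parse_command(command: str) -> dict:
    """Map each word of a spoken command to an inferred intent."""
    intent = {"override_primary_objective": False, "params": {}}
    body = command
    if command.lower().startswith("primary objective:"):
        # Step 1: the prefix overrides the existing primary objective.
        intent["override_primary_objective"] = True
        body = command.split(":", 1)[1]
    # Steps 2-5: resolve each remaining word against the knowledge bases.
    for word in body.strip(" .").lower().split():
        entry = MAIN_CORE_INDEX.get(word) or AUXILIARY_DATA_POINTS.get(word)
        if entry:
            intent["params"][word] = entry
    return intent

plan = parse_command("Primary Objective: Return ship home safely.")
print(plan["override_primary_objective"])  # True
print(sorted(plan["params"]))              # ['home', 'return', 'safely', 'ship']
```

The point of the sketch is only that each word resolves independently to an intent fragment, and the fragments combine into one executable plan.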

Glossary of terms

Primary Objective: The ultimate purpose of all existing functions. Can be overridden by the commanding user. Any Secondary Directives must not jeopardise the successful outcome of the Primary Objective.

Waypoint Directives: Mandatory sub-directives that must be completed to define the Primary Objective outcome as successful.

Secondary Directives: Auxiliary (optional) sub-directives that complement the success status of the current Primary Objective.

Main Core Index: A knowledge base of all functions, historical directives and their outcomes.

Auxiliary Data Points: Any available access points for external data sources to feed the Main Core Index.

AI Data Core: A knowledge base of all artificially authored functions generated to complement the Main Core Index in pursuit of a successful Primary Objective outcome. Can be disconnected by authorised users at any point.
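The relationship between these terms can be sketched as a toy data model. The field names and the success rule are illustrative assumptions drawn from the glossary (Waypoint Directives are mandatory for success; Secondary Directives are optional and must never block the outcome), not a specification.

```python
from dataclasses import dataclass, field

# Toy model of the glossary terms above; all names are illustrative.

@dataclass
class Objective:
    name: str
    waypoint_directives: list = field(default_factory=list)   # mandatory
    secondary_directives: list = field(default_factory=list)  # optional
    completed: set = field(default_factory=set)

    def is_successful(self) -> bool:
        # Success requires every mandatory Waypoint Directive to be done;
        # Secondary Directives never block the outcome.
        return all(w in self.completed for w in self.waypoint_directives)

obj = Objective(
    name="Return ship home safely",
    waypoint_directives=["chart course", "avoid known dangers"],
    secondary_directives=["conserve fuel"],
)
obj.completed.update({"chart course", "avoid known dangers"})
print(obj.is_successful())  # True, even though "conserve fuel" was skipped
```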

Influence Of Human Level Artificial Intelligence on Addiction

Following the Asilomar conference last month [https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar], I started thinking about Human Level AI and its potential impacts on society as we know it today. To frame the focus, I chose the topic of addiction and the support methods that could be enhanced with this kind of technology.

Conversation is an immensely powerful tool. Influence and social capital are products of conversation, instruction and linguistic expression. Interpretation is subjective, however, which leads to a unique emotional experience for every evaluator. Simple words, spoken or written, can inspire radical happiness, invoke deep sadness or any level of emotional response in between, depending on the recipient’s personal understanding and current state of mind.

Reading up on super AI inspired me to think about the boundaries of this kind of technology and its potential to harness the power of these natural emotional responses. Imagine a machine that is able to converse with an individual, deconstruct their responses and then, just as quickly, accurately forecast the release of endorphins or the chemical levels in the individual’s system, with the intent of altering the branch of communication accordingly. This could enable ultimate control over the power of influence.

Humans easily influence others through conversation. Even with a basic understanding of psychology, humans rapidly predict an expected response and their follow-up actions. The unknown variable is the interpreter’s true understanding of what was expressed, which could only be validated by measuring the chemical changes physically occurring within the hearer or reader. Assuming an AI that can harness these abilities and instantly assess the data to inform further expressions, some questionable dilemmas arise.

Addicts and substance abusers who actively seek help in controlling their addictions often require psychological advice and ongoing moral support. These conversations boost will-power, which in turn helps to suppress cravings through chemical reactions in the brain. Through supportive and encouraging conversation, people can overcome immensely powerful personal challenges.

If human-level AI of this magnitude is coupled with the power to instantly monitor these chemical levels in the brain while delivering encouragement that specifically boosts the individual’s unique will-power, then presumably the effects would be far more immediate. To create influence is, ultimately, to manipulate at a chemical level.

This raises an ethical question: to what extent is it acceptable to allow AI to influence another living being, if at all? Presumably to the benefit of the being, but under whose interpretation of ‘benefit’ – the more informed AI or the disadvantaged being?

In the case above, the human’s reactions and responses are untrustworthy, even to themselves. Should an overpowering craving cause a relapse, then the human’s intent is compromised. If the AI has the power to subdue this craving, should it act on that power, or allow the relapse to happen? In this case, help from the AI can only be given by disobeying the addict’s wishes.

Surely the professionals who currently offer supportive services, aiming to help someone overcome such demons, inherently tackle this kind of ethical dilemma – advocating on behalf of those who cannot help themselves. Should Human Level AI carry the same responsibility if it is within its power?

Technology is constantly used and improved to overcome obstacles in the medical sciences, and in my opinion this should be no different.

We, the Jugglers

We juggle. We’re jugglers. We juggle life until we drop all the balls. We juggle time, home, family, health, ambitions, friends, careers… And the balls are not of equal size nor equal weight. No. They’re all different – in fact, they change mid-air. One ball is nice and light; controllable. Then in a moment, it’s too heavy to handle. All we want is balance. For the balls in our act to balance out. Because it’s not the juggling that bothers us. No; the juggling is not bad at all. When we see past the hurt, exhaustion and fear of dropping – we catch that glimpse. That awe-inspiring, motivational reminder of why we’re juggling in the first place. And it’s only a glimpse that we need. That little insight that gives us enough energy to push through and keep on going. Sometimes it’s a look in your wife’s eye; a child’s laugh; a stranger’s compliment; a pat on the back from a colleague… the glimpse can come from anywhere. So keep on juggling. And give someone a glimpse every now and then.