Artificial Intelligence and Patents

I recently had the privilege to speak at the 21st Annual Symposium on Intellectual Property Law and Policy. I was on a panel with IP professionals from Patterson+Sheridan and Navid Alipour from Analytic Ventures. I gave a brief introduction to AI and machine learning to set the stage for the non-technical audience, and then I described some of the challenges I see with patents in the AI and ML space, based on my many years of writing and reviewing patent materials. AI at IP at UA Law has a more detailed summary of our panel.

The Alice decision has greatly confused what is and isn’t patent eligible, and that definitely impacts patents around AI and ML. The Honorable Andrei Iancu, Under Secretary of Commerce and Director of the United States Patent and Trademark Office, described his thoughts and actions to improve the situation, but the fundamental problem can only be solved by Congress and the Courts. Let’s hope they take meaningful action soon, so developers’ expensive investments in AI and ML can be protected (if they so choose).

In any case, I tried to do my part to help educate those IP professionals who may be in a position to actually influence the future.

How should we understand the word “Understand”?

What does the word “understand” mean?  From the outside, is it possible to know whether someone — or some AI program — “understands” you?  What does that even mean?

I assert that if you “understand” something, then you should be able to answer questions and perform tasks based on your understanding. If there are multiple tasks, then there are multiple meanings of “understand”.  Consider this classic nursery rhyme:

Jack and Jill went up the hill
To fetch a pail of water
Jack fell down and broke his crown
And Jill came tumbling after

There are many different tasks an AI program can perform, leading to multiple different meanings of “understand”.  Different programs can perform different tasks: 

  1. Return Counts: 4 lines, 25 words
    A simple procedural program can possess a very rudimentary understanding of the text.
  2. Return Parts of speech: nouns: Jack, Jill, hill, … verbs: went, fell, broke
    Simple NLP processing can understand each word’s part of speech.
  3. Return Translation: Jack und Jill gingen den Hügel hinauf …
    Translation between, say, English and German, requires more understanding of the text to ensure noun and verb agreement, how to properly reorder the words, etc. 
  4. Return Summary: Story about boy and girl’s disaster while doing a daily task
    Summarization is a much harder task. 
  5. Use Common sense: it’s odd they went uphill to get water
    It’s “just common sense” that you go down the hill to get water, not up the hill.  This is a very hard problem. 
  6. Create Interpretation: Attempt by King Charles I to reform the taxes on liquid measures. He … ordered that the volume of a Jack (1/8 pint) be reduced, but the tax remained the same. … “Jill” (actually a “gill”, or 1/4 pint) is said to reflect that the gill dropped in volume as a consequence. [Wikipedia]
    I love this explanation of the nursery rhyme from Wikipedia: it was political condemnation, encoded as a poem, of King Charles’ attempt to raise tax revenue without changing the tax rate.  A program that could return explanations like this would have an extremely deep understanding of the poem and its social and political context. 

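To make the first two tasks concrete, here is a minimal Python sketch (my own illustration, not from the original talk). The counts need nothing but string operations; the part-of-speech tags assume the NLTK library and its tagger data are installed.

import nltk

rhyme = """Jack and Jill went up the hill
To fetch a pail of water
Jack fell down and broke his crown
And Jill came tumbling after"""

# Task 1: counts -- a purely procedural "understanding" of the text.
print(len(rhyme.splitlines()), "lines,", len(rhyme.split()), "words")   # 4 lines, 25 words

# Task 2: parts of speech, e.g. ('Jack', 'NNP'), ('went', 'VBD'), ...
# One-time setup: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
print(nltk.pos_tag(nltk.word_tokenize(rhyme)))
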
Of course, we could add many other definitions/tasks to this list, each leading to a new definition for “understand”. As the list grows, some pairs of definitions can’t be ordered according to difficulty, so that list would not be totally ordered.

This highlights a major source of confusion.  A company whose software implements a simple task (high on this list) can correctly claim its software “understands”. But the lay public most often interprets “understand” to mean a very complex task (low on the list). When this happens, the company has “overhyped” or “oversold” its software.

The fundamental problem is that some words, like “understand”, are just too vague.  Eskimos are said to have over 50 words for different kinds of “snow”, each describing a particular shade of meaning. I assert we need more granular words for “understand” — and other similarly vague words — to represent the different shadings.

A good example of what I mean comes from the US National Highway Traffic Safety Administration (NHTSA). They define multiple capability levels for autonomous vehicles:

  • Level 0: The driver (human) controls it all: steering, brakes, throttle, power.
  • Level 1: Most functions are still controlled by the driver, but a specific function (like steering or accelerating) can be done automatically by the car.
  • Level 2: at least one driver assistance system of “both steering and acceleration/deceleration using information about the driving environment” is automated, like cruise control and lane-centering. … The driver must still always be ready to take control of the vehicle.
  • Level 3: Drivers are able to completely shift “safety-critical functions” to the vehicle, under certain traffic or environmental conditions. The driver is still present and will intervene if necessary, but is not required to monitor the situation in the same way as in the previous levels.
  • Level 4: “fully autonomous” vehicles are “designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip.” However, this does not cover every driving scenario.
  • Level 5: fully-autonomous system that expects the vehicle’s performance to equal that of a human driver, in every driving scenario—including extreme environments like dirt roads that are unlikely to be navigated by driverless vehicles in the near future.

If a company claims they offer a “Level 3” car, the public will correctly know what to expect.

So the next time someone says “I understand”, give them a few tasks to see how deeply they really do “understand”.

What do you think?  Did you “understand” this post?  🙂

AI Era Requires a Probabilistic Mindset

You may have heard that AI (or cognitive) era applications are “probabilistic”. What does that really mean?

Let me illustrate with two hypothetical applications.

First Application

You hire me to write a billing application for your consulting firm.  You give me the list of all your employees and their hourly billing rates.  You also give me last year’s data, including the number of hours each consultant worked for each client account and the bills you generated.

I do my development and come back to you later and say “I’ve finished the application and testing shows it computes the correct amount for 90% of the bills”.  What do you say?  You say,

  • You’re not done yet
  • You’re not getting paid yet
  • Get back to work

This is a classic “procedural era” application.  We all expect — and demand — 100% correct answers for procedural era applications.
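For contrast with what follows, here is a minimal sketch of the procedural-era billing computation (my own illustration; the function and field names are hypothetical). There is exactly one right answer, and anything else is a bug.

def compute_bill(hours_worked, hourly_rate):
    # Hours times rate: a deterministic formula with a single correct result.
    return round(hours_worked * hourly_rate, 2)

assert compute_bill(37.5, 150.00) == 5625.00   # "90% correct" would mean 10% buggy
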

Second Application

Now let’s change the requirements a little bit. Sometimes you give discounts and sometimes you give freebies.  You do this because it makes clients happy — and happy clients are important to your business.  Sometimes you do this for your largest clients, because they bring you so much business.  Sometimes you do this for clients that are considering big orders, because you want to show how much you value them.  Sometimes you do this for “friends & family” clients, because increasing their happiness increases your own happiness.

Again, you give me the bills for the last year.  And again, I do my development and come back to you and say “I’ve finished the application and testing shows it computes the correct amount for 90% of the bills”.  What do you say this time? You say

  • This is fabulous
  • Here’s a bonus for such a high accuracy rate
  • I’ve got this other program I’d like you to write for me.

Why such a different response between these two similar applications?

This is a classic “cognitive era” application. There’s no obvious formula for how to make clients happy.  An expectation of 100% correct answers is completely unrealistic.  For some medical diagnoses, even expert physicians only agree with each other about 85% of the time, so how can we ever even think a computer program can be correct 100% of the time?

The example illustrates that we must change our mindset as we move into the cognitive era. While a few applications achieve NEAR (but not exactly) 100% accuracy (e.g. handwritten digit classification), many successful cognitive era applications achieve well below 90% accuracy; 70% or 80% is often the best we’ve been able to achieve.
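As a hedged sketch of what “90% correct” might look like in practice (my own illustration, not the author’s system; the data and the hidden discount rule below are synthetic), one could train a regression model on last year’s bills and count how many held-out bills it predicts within a tolerance:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
hours = rng.uniform(1, 100, n)
rate = rng.choice([100, 150, 200], n)
client_size = rng.uniform(0, 1, n)                 # stand-in for "largest clients"
base = hours * rate
# Hypothetical, unobserved discount behavior the model must approximate.
discount = np.where(client_size > 0.8, 0.15, 0.0) + rng.normal(0, 0.02, n)
actual_bill = base * (1 - discount)

X = np.column_stack([hours, rate, client_size])
X_train, X_test, y_train, y_test = train_test_split(X, actual_bill, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
predicted = model.predict(X_test)

# "Correct" here means within 5% of the real bill -- a judgment call, not a rule.
accuracy = np.mean(np.abs(predicted - y_test) / y_test < 0.05)
print(f"{accuracy:.0%} of held-out bills predicted within 5%")
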

Perfection is simply not a realistic goal in the cognitive era.

That’s why we say cognitive applications are PROBABILISTIC.


New Job

After  spending almost 5 years working in IBM’s Watson and Watson Health groups, I am moving on.

I am extremely pumped to join IBM Research.  This is a dream come true!

We’re beginning work on a project called Cognitive Eldercare.  Our goal is to keep elders in their home (or assisted living facility) as long as possible and prudent.  With the planet’s aging population, there is an absolutely gi-nor-mous market for eldercare solutions, especially those that cognitively integrate many disparate aspects.

My task is to build the key foundational layer, called Knowledge Reactor, which is a large Titan graph database extended to emit (react to) graph changes by publishing those changes to a Kafka messaging infrastructure.  We’ll also work (play is more accurate) with lots of IoT devices:  sensors, effectors, robots, drones, and who knows what else.
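As a rough sketch of the “publish graph changes to Kafka” idea (my guess at the shape, not the actual Knowledge Reactor code; the topic name, hook, and event fields are hypothetical, and it assumes the kafka-python package with a broker on localhost):

import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

def on_graph_change(change_type, vertex_id, properties):
    # Hypothetical hook called whenever the graph database commits a change.
    event = {"type": change_type, "vertex": vertex_id, "properties": properties}
    producer.send("graph-changes", value=event)    # topic name is made up

on_graph_change("vertex-updated", "elder-42", {"location": "kitchen"})
producer.flush()
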

All that would be cool enough.  But the best part is we’re going to be writing a lot of agents that react to incoming events and graph change events.  So, I’ll get to build more multiagent systems, and extend my PhD research.

Can life get any better?  God is good!

The term “Data Scientist” is unstable

I recently attended a National Consortium for Data Science event at a local university (University of North Carolina at Chapel Hill).  As you can imagine, many people were talking about Big Data and Data Scientists.  But my opinion about the term “Data Scientist” seems to differ from everyone else’s.

I first heard the term “Data Scientist” about 5 years ago when I joined the Watson group at IBM. No one seemed to know what the term really meant then, but the idea was the world of Big Data was going to require a lot of new skills and that very few people had the necessary skills to successfully compete. So we had to get started building those skills now.

At that time, the mental image that formed in my head was a “cloud of skills”:  that is, a cloud of points where each point represented one skill.   The cloud contained a rather large number (20-30) of points/skills.  We could certainly identify some of the points/skills (data collection, data cataloging, model building, machine learning, etc), but it was assumed that some of these necessary, new skills were currently unknown or at best ill-defined.  I imagined the cloud was currently diffuse, but over time, as everyone began to better understand just what was required, the cloud of skills would contract, becoming denser and better focused.

Now, some five years later, the situation seems to have only gotten worse.  The term “Data Scientist” has become an all-inclusive catch-all and kitchen sink.  Whenever someone sees something that seems to be required, they toss it into the cloud of skills every Data Scientist “must have”.  The cloud is getting bigger.  It is getting broader. It is getting more diffuse.

I agree that many skills are needed to adequately work with big data.  Some of the skills in this cloud are

  • business analyst who identifies what a business needs or would find valuable along with plausible ideas of what might be technically possible
  • architect capable of taking the business analyst’s vision (which is likely partially right and partially wrong) and converting it into something that can actually be built
  • data gatherer (both raw data and ground truth)
  • digital rights manager
  • data manipulator
  • data organizer and cataloger
  • model builder
  • machine learning expert
  • visualization builder
  • security architect to ensure data is protected
  • DevOps person to continuously fuse all these parts together over the course of many experiments
  • statistician
  • lawyer to oversee the contracts across the many different parties involved
  • dynamic presenter who can persuasively demonstrate the solution

Sure, there may be a few brilliant, lone wolf geniuses out there who possess all these skills.  But, realistically, it is inconceivable to me that there will ever be a large number of people who have all of these skills, and thus become true examples of the all-inclusive “Data Scientist”.

Instead of trying to jam all of these skills into a single person, what we really need are “Data Teams”.  Every other kind of engineering uses teams, so why should the world of Big Data be different?  I predict two possible evolutions for the term “Data Scientist”:

  1. [all-inclusive]  The term Data Scientist continues to be all-inclusive, becoming so broad and so ill-defined that it becomes unstable and completely collapses, signifying nothing useful.  It will be  added to the trash heap of unused terms.
  2. [data-focused]  The term Data Scientist will come to mean a person responsible for the large volumes of data.  In the list above, this includes data gatherer, digital rights manager, data manipulator, and data organizer.  Of course people who play this role can have other, non-Data Scientist skills, too.

Let’s stop pretending we are searching for, or attempting to train, individuals who possess all of these skills.  The all-inclusive version of the term “Data Scientist” is not stable.

I strongly encourage we adopt the “data-focused” version, because I believe we absolutely need people to perform this new and critically important role. Data is a new element to the puzzle and it will require special tooling and expertise.  And Data Scientist seems to be the perfect term for someone concerned with the technical, organizational and legal aspects of Big Data. But we need to see the Data Scientist as just one member of a larger Data Team.

What do you think?  Is there a better term for a data-focused member of the team?  What skills can we realistically expect from a “data-focused” person?

Free Will and God

I am both a scientist and a Christian.  Many people—from both camps—sometimes see these two disciplines as fundamentally incompatible, but I adamantly disagree.  This post explores one question that has troubled me in my Christian walk and may have troubled you:  can free will and predestination both be true? I don’t even begin to claim I’ve solved this riddle, but I describe one way in which both concepts co-exist.

My argument consists of three points.

Point 1 — We all played Tic-Tac-Toe as kids.  At first, it was a lot of fun, but then we tired of it and quit.  Why?  Because sooner or later, we realized that the first player has a big advantage, and then the sport is gone.  Wikipedia’s article describes a strategy for playing the game of Tic-Tac-Toe where the first player is guaranteed to either win or draw.  Playing this winning strategy ensures the first player NEVER loses. (There is no strategy for the game of Tic-Tac-Toe that ensures a player always wins, but such games exist.)

Now, no one would argue the first player can control or over-ride the second player’s free will.  But still the first player can force (predestine) a win or a draw.  That is, when the first player plays the strategy, he has partial predestination of the outcome; he can “predestine” some important elements of the outcome (e.g. winning vs. losing), even if he can’t “predestine” every detail (e.g. the second player’s exact sequence of moves).  Both free will and predestination happily co-exist in Tic-Tac-Toe.
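A small sketch makes the claim checkable: exhaustive minimax over the Tic-Tac-Toe game tree shows that the value of the game for the first player under optimal play is a draw, i.e. the first player can always force at least a draw no matter how the second player exercises his free will. (My own illustration, not from the post; it takes a few seconds to run.)

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def value(board, player):
    # Minimax value from X's perspective: +1 X wins, 0 draw, -1 O wins.
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0   # board full: draw
    children = [value(board[:m] + player + board[m+1:],
                      'O' if player == 'X' else 'X') for m in moves]
    return max(children) if player == 'X' else min(children)

print(value(' ' * 9, 'X'))   # 0: the first player can force at least a draw, never a loss
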

Tic-Tac-Toe is a very simple game.  What about other games?  We simply don’t know whether there exists a winning strategy for either player in chess.  The state space for chess is far too large for a brute force analysis, and no one has produced any other evidence that a winning strategy does, or does not, exist.  We simply don’t know. But “maybe”.

Another example is the “Game of the Universe” (GOTU) — our life in this physical universe — which is infinitely more complicated than chess.  Rather than chess’s discrete state space and set of moves, the state space and possible moves for GOTU are continuous and therefore (doubly) infinitely larger (at least aleph-1).  If we can’t figure out whether a winning strategy exists for chess, what chance do we have of figuring out whether there exists a winning strategy for GOTU?  Again, we simply don’t know.  But “maybe”.

So my first point is (1) some games have winning strategies where one player can force play to his desires, and (2) while we have no evidence GOTU has a winning strategy, we don’t have any evidence such a strategy does NOT exist.  All we can say is that it is conceivable a winning strategy exists.

Point 2 — In a completely different kind of game, there are 5 reasonable properties we desire from a voting system (non-dictatorship, unrestricted domain, independence of irrelevant alternatives, positive association of social and individual values, and non-imposition).  We are reasonable and right to seek voting systems that satisfy all 5 criteria.  But Arrow’s Impossibility Theorem showed that it is not LOGICALLY possible to have such a system.

So, my second point is, even though we may desperately DESIRE (even demand?) a set of properties hold about a system (here, GOTU), it is POSSIBLE that those properties are logically incompatible.  So we should be careful about our wishes/demands.

Point 3 — All I’ve done so far, is to illustrate that (1) predestination (winning strategies) and free will can co-exist (Tic-Tac-Toe), and (2) it is not always possible to construct systems that have all the properties we desire they have (voting systems).

What does this all mean?  It means God COULD have constructed our world to include both predestination and free will.  It may strike some readers that it is horribly “unfair” for God to create the world (GOTU) in such a way that he always “wins”.  And some readers may complain that it is “unfair” that our free will does not give us the freedom to alter the final state of the world.  In short, some people may question how God can be so “unfair”.

My only answers are (1) if we don’t know whether or not the game of chess has a winning strategy, how can we make demands on God about whether or not the GOTU should allow a winning strategy?  And (2) it would be ridiculous to make demands that are not even logically consistent.  But God gives His own best answer to an accusation that He might be “unfair” regarding these matters:

Where were you when I laid the earth’s foundation?   Job 38:4

So, I encourage you to exercise your partial free will, knowing that the overall end of the world can turn out exactly as God desires it (partial predestination).

What is Cognitive Computing?

Cognitive computing is all the rage these days.  But what is it, really?  I’ve been thinking about it quite a bit lately, and I believe I have come to a few novel conclusions.

Wikipedia has a nice long article about Cognition.  It expansively covers a great many things that I would agree are “cognitive”, but not (yet) “cognitive computing”.  I’m interested in writing cognitive software, not in constructing a full, artificially intelligent “faux human”.  So, I’ll focus only on cognitive computing.

Rob High, CTO of IBM’s Watson Group, defines cognitive computing as four “E”s.

  • Cognitive systems are able to learn their behavior through education
  • That support forms of expression that are more natural for human interaction
  • Whose primary value is their expertise; and
  • That continue to evolve as they experience new information, new scenarios, and new responses
  • and does so at enormous scale.

Adaptive

I agree with education and evolve, although I see these two as similar concepts.  To make these ideas fit my somewhat artificial classification system below, I rename them adaptive.

Ambiguity

However, I disagree with limiting the definition to human expression.  There are many processes that I believe require cognitive skills that are not naturally interpreted by humans.  Dolphin and bat echo-location are good examples; they are a kind of “seeing” but humans can’t do it.   Any application that can monitor the network communication into and out of an organization and correctly identify data leakage gets my vote for “cognitive”, even though humans can’t do it.

Ambiguity is a better criterion than human expression.

Many human expressions are difficult to interpret because they are ambiguous.  I offer the following two examples.

  • Natural language is very ambiguous.  The classic sentence “Time flies like an arrow; fruit flies like a banana” has many different possible interpretations.  Sentences can have ambiguous parses (is “time” a noun, or is it an adjective modifying “flies”; is “flies” a verb or a noun; etc.).  Individual words can also be ambiguous; resolving them is commonly called word sense disambiguation (WSD).  Is “bass” a fish or a kind of musical instrument?
  • Human emotions are ambiguous.  They require interpreting facial expressions, body language, sarcasm, etc. And people often disagree on the proper interpretation of a person’s emotion:  “Is Bob angry at me?”  “No, You know how he is.  He was just making a joke.”

To be more precise, an input is ambiguous when there are multiple output interpretations consistent with that input.  The goal is to determine which output interpretation(s) are, in some sense, most appropriate. Many elements (surrounding context, background knowledge, common sense, etc.) help decide which interpretations are most appropriate.
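Here is a toy sketch of that search over interpretations (my own illustration; the senses and their “gloss” word sets are made up): a simplified Lesk-style disambiguator that scores each candidate sense of “bass” by its overlap with the surrounding context and keeps the best-scoring interpretation.

# Candidate senses and the context words that typically accompany each (invented).
SENSES = {
    "bass": {
        "fish": {"fish", "lake", "river", "catch", "fishing"},
        "music": {"guitar", "music", "band", "jazz", "play", "low", "sound"},
    }
}

def disambiguate(word, context_words):
    # Pick the sense whose gloss overlaps most with the surrounding context.
    context = set(w.lower() for w in context_words)
    scores = {sense: len(gloss & context) for sense, gloss in SENSES[word].items()}
    return max(scores, key=scores.get), scores

sense, scores = disambiguate("bass", "he played the bass in a jazz band".split())
print(sense, scores)   # expected: 'music' scores higher than 'fish'
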

Pushing this idea farther, we should change from discussions of structured data vs. unstructured data and start discussing unambiguous data vs. ambiguous data.

From:  structured data vs. unstructured data

To:     unambiguous data vs. ambiguous data

There are many cases where structured vs. unstructured misses the point.  A row of structured data is easy to process not because it is physically separated into separate fields.  It is easy to process because there is only one way to interpret that row.  Structured data can even be ambiguous, in which case we need to “clean the data” (remove ambiguity).  Java code has exactly the same structure as natural language, but compilers are not “cognitive” because the Java programming language is unambiguous.

The fundamental problem is to accept an ambiguous input plus its available context, and search through the space of all possible interpretations for the most appropriate output(s).  That is, a cognitive process is a search process.

In the Programmable Era, programmers were able to resolve low levels of ambiguity by the seat of their pants, either because there were few possible interpretations or because interpretation resolution could be “factored” into a sequence of more or less independent resolution steps.  But as the amount of ambiguity increases, programmers are unable to satisfactorily resolve ambiguity by the seat of their pants.  In the Cognitive Era, programmers need Ambiguity Resolution Frameworks (ARFs) to help them process large amounts of ambiguity.  Machine learning is one kind of ARF: it takes as input multiple features (each of which can be understood by the programmer) and combines them to resolve down to a few interpretations (note that I’m not requiring ARFs to perfectly resolve all ambiguity to a single interpretation).  The Cognitive Era is largely populated by cases where imperfect resolution of large interpretation spaces is an unavoidable consequence of the input’s irreducible ambiguity.

Action

I also disagree that expertise is a defining criterion for cognitive computing.  A better, more inclusive, criterion is action.

Only humans can accept and interpret expertise.  Requiring a cognitive system to output expertise necessarily forces a human “into the loop”.  While appropriate in some cases, it is wrong to require a human in the loop of every cognitive system.  Rather, we should encourage the development of autonomous systems that are able to act on their own.  The distinction between expertise and action is not completely black and white:  Watson’s Jeopardy! system did both by ringing a buzzer (action) and providing a response (expertise).

Many years ago, IBM defined the “autonomic MAPE loop” consisting of four steps: M: monitor (sense), A: analyze, P: plan, and E: execute (act via effectors).  Not all cognitive systems must contain a MAPE loop, but I see it as more inclusive than the 4 E’s above.  Expertise is best characterized as the output of the Analyze step, requiring a human to perform the Plan step.  The Observe-Interpret-Evaluate-Decide loop is similar to the MAPE loop, with Observe=M, Interpret & Evaluate=A, Decide=P & E.  But they both end with an action.
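A minimal skeleton of such a loop might look like this (my own sketch, not IBM’s reference implementation; the sensor, threshold, and action names are invented):

class MapeLoop:
    def __init__(self, sensor, effector):
        self.sensor = sensor        # callable returning current observations
        self.effector = effector    # callable that carries out an action

    def analyze(self, observation):
        # Turn raw observations into an interpretation (the "expertise" step).
        return {"too_hot": observation.get("temp_c", 0) > 25}

    def plan(self, analysis):
        # Decide on an action; returning None means "do nothing".
        return "turn_on_fan" if analysis["too_hot"] else None

    def step(self):
        observation = self.sensor()            # Monitor
        analysis = self.analyze(observation)   # Analyze
        action = self.plan(analysis)           # Plan
        if action is not None:
            self.effector(action)              # Execute

loop = MapeLoop(sensor=lambda: {"temp_c": 28}, effector=print)
loop.step()   # prints "turn_on_fan"
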

So instead of the 4 E’s, I suggest we define cognitive computing by the 3 A’s:  Adaptive, Ambiguous, and Action.

Encouraging Industry Adoption of Multiagent Systems

I just recently attended the AAMAS 2015 conference in Istanbul, Turkey.  One of the key topics I wanted to explore (again) is how to encourage adoption of autonomous agent and multiagent technology by industry practitioners. There were many others who were equally interested in this same topic.

Milind Tambe chaired a very interesting discussion panel on that very topic the first afternoon.  This is not a case of theory vs. practice.  There was wide agreement that AAMAS is, and should be, focused on both theoretical and practical aspects of these technologies.

Michael Wooldridge said we shouldn’t kid ourselves.  There are built-in incentives throughout the academic system that naturally encourage more theoretical work.  And AAMAS members are unlikely to significantly modify those incentives.

There was also general agreement that AAMAS should encourage more industrial participation.  But Paul Scerri, who has done quite a bit of practical application work, noted that most of the application-oriented papers he’s submitted in the past get rejected.  His rejection comments included:

  • need more detail on <blank>:   8 pages is not enough room to cover even a 2nd level of detail on a running system
  • not novel:  practical instantiations of technology seldom include novel elements.  They are primarily trying to combine multiple elements from others into a working whole.
  • No statistical significance: the primary test for most practical applications (particularly first generation ones) is whether they work or not.  Statistical significance is not a critical aspect.
  • Why didn’t you use <blank>:  Again, the goal of most practical applications is to get them to work reasonably well.  Detailed architectural or technology trade-off discussions are secondary.

In short, Paul weighed the time and effort he invested in his submitted papers against the likely payoff, and succinctly concluded (in terms of expected utility, EU):

EU(not submit) > EU(submit)

Sarit Kraus talked about the time she did a radio or TV (?) interview about one of her applications.  Afterwards, multiple people came up to her saying “I have a similar problem“.  Some of the problems were similar; some were not that similar, but it at least got a problem-owner talking with a potential problem-solver.  Promoting a variety of MAS applications to the public, in the hopes of eliciting a “similar problem” response, seems like a very effective way to communicate capabilities and encourage more agent adoption.  Therefore, we should spend more time describing example applications, both at AAMAS and through other forums.

I have often been correctly described as more “solution driven” (a solution seeking a problem) rather than “problem driven” (a problem seeking a solution).  But, in technologies that require deep skills, like ours, “solution driven” approaches are likely to be more effective than “problem driven” approaches.

It seems to me a separate conference or workshop, focused specifically on industrial applications, would be beneficial.  The paper reviews should focus on practical aspects, not theoretical ones.  It should be concurrent with AAMAS to encourage cross-pollination between theory and practice.  I know this has been tried before at AAMAS.  We have to figure out how to encourage practitioners to attend.  So … what do practitioners want from AAMAS?  I assert many want tools, techniques, and information that they can apply fairly quickly and easily.  Sure, practitioners are probably also on the lookout for a few longer-term “big ideas”.  But most of the material needs to be as readily accessible and as ready to apply as possible: for example, downloadable toolkits and sample test datasets.

Just my two cents.  What do you think?


Financial Calculus

I recently experienced my own “deflate-gate” (today is Super Bowl Sunday).  I’ve always enjoyed mathematics and was a math major in college.  So I confess, I have an (over?) inflated sense of my math skills.  I was shocked to learn there was a whole different kind of calculus that I never knew even existed — stochastic calculus.

So I’ve been reading Steven Shreve’s books on Stochastic Calculus for Finance, volumes 1 and 2.  This topic is certainly not for everyone, but if the topic interests you at all, this is a great introduction.  I tried reading An Introduction to the Mathematics of Financial Derivatives three years ago, but it was just too dense and too poorly described to be of any real use, and I gave up.  Recently, I decided to try again—this time with Shreve’s book.  It is the contrast between these two books that really makes me appreciate the wonderful job Shreve has done explaining this topic.

I’m interested in financial topics generally because of my job of applying IBM’s Watson technology to the financial domain.  Not that there was a pressing need for stochastic calculus in my job, but hey, you never know how a piece of knowledge might be useful until you need it.

One example of an interesting yet unforeseen connection is the possibility of applying some of the financial ideas and concepts to multiagent systems.  A financial option between a buyer and a seller is similar to a conditional commitment between a debtor and a creditor.  The seller/debtor is limiting his future actions.  This represents a cost to the seller/debtor and a benefit to the buyer/creditor.  The question I’ve been recently pondering is: can I adapt the financial concepts for pricing a financial option to commitments?
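For a flavor of the machinery involved, here is a one-period binomial option pricing sketch in the spirit of Shreve’s volume 1 (my own code; the numbers follow the standard one-period example with S0 = 4, u = 2, d = 1/2, r = 1/4, K = 5). Whether an analogous risk-neutral valuation applies to commitments between agents is exactly the open question above.

def binomial_call_price(s0, up, down, rate, strike):
    # Risk-neutral pricing of a one-period European call option.
    p_tilde = (1 + rate - down) / (up - down)        # risk-neutral probability of "up"
    payoff_up = max(s0 * up - strike, 0.0)
    payoff_down = max(s0 * down - strike, 0.0)
    return (p_tilde * payoff_up + (1 - p_tilde) * payoff_down) / (1 + rate)

print(binomial_call_price(4.0, 2.0, 0.5, 0.25, 5.0))   # 1.20
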

As an aside, even though Shreve covers many substantial mathematical topics, one of the best things about his books is his writing style.  The prose is amazingly light for a mathematical text.  He frequently puts the material back into context by drawing connections between topics.  This does a LOT to help the reader see where we’ve been, where we currently are, and where we’re headed.  In addition to dissecting the math, I’ve also dissected some of his prose.  There are many elements of his prose I’d like to incorporate into my own writing style.

Privacy of Local Clouds

Unfortunately, local clouds do not automatically protect data — they are just an enabling conceptual element.  This inevitably leads to the “measures & counter-measures” game:  for each measure we implement, spies will implement new counter-measures.  And for every counter-measure, we need to respond with yet another measure.  This is a game we have to play.

Security technology has a role to play.  For example, I think there are interesting possibilities using “taint tracking” to track different kinds of information (personal information, GPS location, physical addresses, email addresses, message content, …) as it flows through a program.  I recently read Wikipedia’s Taint checking and TaintDroid: An Information Flow Tracking System for Real-Time Privacy Monitoring on Smartphones.  Taint tracking enables programs to compute what they want, but their output values are “tainted” with all the types of input included in those outputs.  Users then set up policies to control what taint types are allowed to leave their local cloud.  This is a conservative approach, in that using ANY piece of information in a calculation (say, even just the number of characters in my email address) would taint the calculation’s output (even though the length of my email address says almost nothing about the address itself).  Taint tracking offers much finer control of information than “can a program access information X at all”.
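A tiny sketch of the propagation rule (my own illustration, not TaintDroid): every value derived from tainted inputs carries the union of their taint labels, and a policy check gates what may leave the local cloud. The labels and the policy below are invented.

class Tainted:
    def __init__(self, value, taints=frozenset()):
        self.value = value
        self.taints = frozenset(taints)

    def __add__(self, other):
        # Any derived value inherits the taints of everything used to compute it.
        other_value = other.value if isinstance(other, Tainted) else other
        other_taints = other.taints if isinstance(other, Tainted) else frozenset()
        return Tainted(self.value + other_value, self.taints | other_taints)

def allowed_to_leave(value, policy):
    # A value may leave the local cloud only if the policy permits all of its taints.
    return value.taints <= policy

email = Tainted("me@example.com", {"EMAIL"})
gps = Tainted("35.78,-78.64", {"GPS"})
message = email + " is at " + gps                  # carries both EMAIL and GPS taints
print(allowed_to_leave(message, policy={"GPS"}))   # False: EMAIL is not allowed out
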

Of course, it is still possible that a “spy node” infects your local cloud and nefariously sends data up to a data-sucking, giga-cloud in the sky (perhaps called “SkyNet”).  This possibility must also be seriously considered.  And then we have to consider various possible counter-measures.

Overall security of a giga-cloud will be better than the security of a local deca-cloud.  But — and this is a big but — there is a hundred million times more data in a giga-cloud, and therefore its breach is a hundred million times more valuable.  Sure, some local clouds will be hacked, but it’s not very profitable (unless you’re a celebrity who likes to take nude selfies).