I had an interesting day today following the live-tweets from the SAS Inside Intelligence Analyst Event. There were some very interesting tweets that came along, like this one from Ray Wang (Enterprise Analyst with the Altimeter Group, and the most prolific tweeter for the event with around 15% of the total tweets):
Finally, I thought, organizations are starting to understand the value of data and that we can begin to use it for strategic needs. Then Dan Vesset (IDC Analyst and author of a terrific paper entitled Decision Management: A Strategy for Organizationwide Decision Support and Automation – you must be an IDC customer, or pay, to read it) tweeted what I consider the best news from the Event:
Now I started to get excited — we are finally getting to the point where systems can make decisions: look at the data, make sense of it, and not only recommend or report on it, but actually make the decision and maybe, just maybe, even act on it. Ah, the possibilities — all those years of Star Trek and Star Wars finally coming to fruition!
As a big proponent of automation for organizations to truly leverage technology and data management, my head was spinning — could it be possible? Are we really that close to making something like this happen?
Later in the day, I caught a tweet from Venessa Miemis (Futurist, Student, Amazing Brain, and the writer behind the very famous and well-read emergent by design blog) that spoke to a different (yet similar) reality:
A different opinion indeed.
This got me thinking: do we need Sensemakers, people who can make sense of the data — or can we trust the systems to make sense and make the decisions for us?
I had a conversation via Twitter with Venessa about this, but there is only so much you can do with just 140 characters at a time. I told her I would write this post to explain my position further.
Here we go.
I fully believe that there are three factors standing in the way of Sensemakers as Venessa tweeted:
- Scalability – There are around 6.5 billion of us on this planet, and we are growing towards 9 billion in the next ten years. That is too much information to process for that many people. Sure, the counterargument would suggest, with that many more people you can have more Sensemakers — thus you can feed the needs of more and more people. That would be true if Sensemakers were easy to find, train, and deploy. As was pointed out to me in discussions on my previous post, we still don’t know very well what type of people we need to analyze the information — how can we expect to have more of them? To me the model is not scalable and thus would not fit the purpose. To be fair, Venessa feels that this big-box thinking is what got us in trouble before — so why try again? Well, for starters…
- Globalization – We are no longer limited to the information in our near-and-dear communities. The local, small-town mentality that most of us had (yes, even in corporations) has recently been replaced by a global perspective. This is a big world (before you say Duh!, please read on), and to feed the knowledge needs of a global world you need a global mentality. Human beings are nurtured in local groups and communities; we are not global in actions or thoughts. The ability to think globally is not innate, and it is not easy for one person to acquire. Finding, training, and deploying that person – who must also be a Sensemaker – becomes an almost impossible task. Now multiply that by 6.5 billion people or so. Computer systems can handle the magnitude of this need; human beings can only say “Huh?”. Further, globalization has also brought the issue of…
- Complexity and Volume of Information – Raise your hand if you don’t feel overwhelmed by the knowledge and information coming at you (OK, the funny person who raised their hand can now put it down). The sheer magnitude of data, knowledge, and information is mind-blowing. Add to that the complexity of the information we receive and you get an idea of why you feel so overwhelmed. Now, you have to find the potential Sensemakers to take that complex information, make sense of it, connect the dots, and then communicate and explain it to the people who need it. Wanna apply for that job? Me neither.
What I do want is to use computers and systems designed to handle very complex, very large data sets and put them to work the right way. We have seen in the last few months the launch of machines so fast and powerful that my old TI-99/4A seems like a relic — well, even my phone is hundreds of times faster and more powerful than my old computer. Why not leverage those systems for what they are supposed to do? Take large to gigantic data sets, organize them, make sense of them, and then act on them.
To me, this is the way we are moving in the next five to ten years. This is the reality I want to build towards, what I see as our future.
Wanna join me? Why not? What do you think is a better way to handle these demands and needs? Let me know your thoughts; I would love to know what you are thinking…
24 Replies to “Oh, the Dilemma! People or Systems?”
This is a great post. It is fun, novel, and a bit of a change for you; I like it. This is closer to a ‘day in the life’ life-stream sorta post. When streams of information come flying at us at a breakneck pace, we feel the need to react to them at the same pace. This is a comment I made last evening to Venessa, after your conversation.
The problem for me in this scenario is that I typically take a two-pronged approach to ‘making sense’ of ideas, questions, or information. I have a visceral or ‘gut’ reaction, which sometimes I share (not always a good idea, BTW), and then the more thoughtful reaction. Here is the problem – the amount of information prevents both types, and I typically only get to the visceral reaction. This gives others a view into your thought process but may not allow for the best results in the long term.
So, I do like the idea of the “Sensemaker” – thanks for volunteering; I look forward to being able to call upon you when needed. I can do that because I trust you. Here is the problem for the general audience: Trust. We both saw it yesterday – a tweet of a statistic based on a flawed survey, compounded by a sensationalized interpretation of the ‘results’. Others saw it as well.
Sensemaker is a powerful position. I am willing to trust you. But for everyone else, maybe a computer is the best person for the job.
Mitch Lieberman’s last blog post: mjayliebs: @RGambhir – Thanks for the mention, I appreciate it!
This is the reason people cannot be relied on — almost a week for a reply! A computer would’ve done it in less time.
trust and reputation are most certainly the achilles heel of the social world. the more social we become, the more that trust and reputation matters… and the more that computers can take over and analyze loads of information without bias. right?
at least that is the way i see it.
so, i want to understand what kind of decisions you’re talking about when you refer to wanting to be able to rely on machines to aggregate data, make a decision, and act on it. i’m a little apprehensive about your excitement to turn over your agency to a machine. (but again, i’m not clear as to the type of decisions you’re talking about). i think it’ll be amazing when we can have more comprehensive pictures painted for us via technology, but i would still think decisions would be made and acted upon by people. i want to say more, but i’ll wait to hear examples of the type of decisions you mean.
p.s. Scalability of Sensemakers – a thought just occurred to me as i was about to click submit. what if there were a few million Sensemakers? now again, i’m thinking of the idea i’ve been developing about a comprehensive reputation/influence/strengths assessment (http://emergentbydesign.com/2010/02/21/tapping-the-network-to-facilitate-innovation/) combined with some kind of ‘cognitive authority’ tool that would determine the level of expertise/trust you command….. what if those millions of sensemakers’ analyses and opinions were aggregated by the machines, and then displayed alongside other tools for decision-making? then you kind of get the best of the best from the human side, and also the data analysis from the machines?
this method also takes into account your other points of Globalization & Complexity, imo
Welcome to my humble abode — er blog. Thanks for the comment!
(Yes, I am very late in replying — not intentionally, just beyond busy writing more good stuff for you guys to tear apart — I mean, comment on.)
Two parts. First, excellent question on what type of decisions. I am certain we will find that just about any decision can eventually be made by a machine, given sufficient parameters and data. The problem is not the type of decision to be made, but the availability of programmable rules and constraints to make that decision, or the existence of sufficient data to remove ambiguity (to a certain degree) from the decision. As with human decision makers, at some point more data and rules won’t make a difference and you have to pull the trigger and execute (I have had this discussion with my wife maybe 50 times in the last two months alone!). I believe, and don’t have hard data to back it up yet, that these decisions are better made by machines, without the human feelings of uncertainty, fear, and other emotions in the way. Not only that, but machines can execute without second-guessing. Would I trust anything to a machine today? No, not yet — but we are on our way there. I am not going to say 200 years into the future, or even 100 — my horizon is the next 10-15 years. Before you refute it as impossible, keep in mind that 15 years ago the internet was only accessible via AOL and its friends, and only via dial-up (well, I had an ISDN line, but I was spoiled).
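To make that concrete with a toy sketch (the rules, thresholds, and field names here are entirely hypothetical, not from any real system): an automated decision is just programmable rules plus sufficient data, and when the data is incomplete the ambiguity remains and the machine should abstain rather than guess.

```python
def decide(loan_amount, credit_score, income):
    """Toy automated decision: fixed, programmable rules.

    When data is missing, ambiguity remains and the system
    abstains rather than guessing -- that part stays human.
    """
    if None in (loan_amount, credit_score, income):
        return "refer-to-human"
    # Hypothetical rules; a real system would encode many more constraints.
    if credit_score >= 700 and loan_amount <= income * 0.4:
        return "approve"
    return "decline"
```

Given the rules, the machine executes without second-guessing; all the human work is up front, in writing the rules.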
Second part: I like the concept of Sensemakers, don’t get me wrong, but I am very concerned with believing in and relying on a human-machine interface to do the work. And, of course, here comes the trust/reputation portion again (as Mitch pointed out above). Yes, trust is a sine qua non for relying on humans (machines see trust differently, mostly as an outcome of good and/or bad programming, not as a matter of belief and reputation).
I find the idea of using Sensemakers as intermediaries, taking machine output and turning it into machine input, very intriguing… crazy enough to work… It would also reduce the amount of work you have to do before the first analysis, and it fits in with the years-old model of “secret customer service”, where humans complement machines to provide better, faster, more accurate automated service (yes, automation is big with me).
thanks for an excellent comment, looking forward to more interactions.
People are systems; we are filtering our value offering, and thus it’s not about replacing it with technology but rather aligning our value offerings with technology to create context…
As humans we can’t escape “nonsense”, but technology and machines can. Thus what’s happening here is that technology, if it had a voice, would say, “I can only make sense once you’ve made sense.”
We are inputting data into a system, but what data? The data, I believe, is what Venessa is working on: human capital and inductive thinking, filtered into a technology.
It is a very interesting proposition, and I agree wholeheartedly that what we input into the systems is what will eventually produce the specific outputs we are seeking. There is no doubt that human filtering happens in the programming of the filters and ‘noise-reduction’ elements of the systems. And it is not going away.
I don’t think that we can ever replace that layer, but I am very comfortable entering those filters before the processing of data and letting the system handle the rest on its own. I don’t need a sensemaker to interpret what the systems produced, I need someone to tell the systems how to interpret.
That is the single step reserved for the humans and one that – at least for now – cannot be replaced. It is obviously a matter of perspective, but I think that systems are ready to handle that load.
Thanks for the read!
Another fine article! I started to write a long comment and then decided to post my response in my own blog. Here are a few highlights though:
“What I want from computers is the ability to capture data and the tools that allow me to help analyze it. Pattern recognition is important and oftentimes hard to see within the raw data, so the tools need to have the ability to help clear the data clutter.”
Oops. Here’s the link to my full article response.
Thanks for the read, and glad to initiate the conversation at your site. I left you a comment; I think you are taking a very interesting position, and one I have had experience with: where to place the human in the equation (similar to what I was telling Spiro).
To me the human precedes the systems, making it a symbiotic relationship — I left a couple of examples of models in your post.
From a systems analysis perspective, people are part of the system and system doesn’t necessarily mean technology.
That said, I agree with Venessa that we need a better idea of the types of decisions you are thinking of automating. If the decision is simply: “this ticket is out of SLA compliance, it must be escalated” then OK. But how complex are the decisions you expect technology to make?
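A rule like “out of SLA compliance → escalate” maps cleanly to a few lines of code. A minimal sketch (the four-hour window and the field names are hypothetical):

```python
from datetime import datetime, timedelta, timezone

SLA_LIMIT = timedelta(hours=4)  # hypothetical SLA window

def needs_escalation(opened_at, now=None):
    """Return True when a ticket has been open past the SLA limit."""
    now = now or datetime.now(timezone.utc)
    return now - opened_at > SLA_LIMIT
```

Decisions of this shape are trivially automatable; the open question is how far past them the automation can go.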
I am not sure I understand why certain topics would be appropriate and others not. It sounds like we could not trust systems with certain complex matters? Would their decisions be wrong? How does complexity affect sheer computing power? The systems provide the brute force; the human precedes that force with the logic parameters. Complexity may mean more logic parameters, but not necessarily more complexity for the system (probably more time).
One thing that has struck me through this (delayed, my fault) conversation has been people’s reluctance to trust the system with complexity. I personally trust “them” more than I trust us with any decision where our biases enter the picture.
If you want to make the case that we may not know the full set of logic parameters we need for complex systems, I can listen to that and somewhat agree. But it is our failing, and we need to work on it. Eventually we are going to be staring at both simple and complex decisions that will require extensive processing and will have to be handled by powerful cyber-entities.
We are falling behind if we cannot precede the power with logic. Systems can make sense if we provide the logical constraints, but they cannot operate without them.
Thanks for the read!
Hi Esteban and others,
Excellent article and great comments, but I don’t really see the dilemma.
Why not use systems (technology and procedures) to aggregate, filter and structure information, and make recommendations that can be analyzed and/or verified by Sensemakers? This would combine the best of two worlds: raw computing power to help us manage the unmanageable and recommend decisions, and human judgment to actually make the decisions.
To me, the power of analytics doesn’t lie in automating easy decisions. I think its power lies in helping us retrieve and use information that would otherwise remain buried in an unmanageably large amount of data. The real value of analytical engines lies with the people using them.
I don’t think we can do without human judgment and expertise anytime soon. After all, analytics should not be just about gaining insights and making decisions; it should be about helping us define and carry out concrete actions for improvement. The translation of insights and recommendations into concrete actions will be influenced by many variables, especially in large business contexts. I just don’t see automatic systems beating us at such complex tasks in the near future.
Thanks for taking this further, Esteban! Really interesting.
Christophe Van Bael’s last blog post: Bridging the Gap Between Social Media Hype and Business Value http://tinyurl.com/yd94ufl @mjayliebs #crm #scrm
I agree with you on the power of analytics lying in helping us manage the reams of data and to spot patterns we would otherwise miss. But I am not entirely sold on the idea of sensemakers due to bias and lack of sufficient talent to implement the concept.
To me, the power of automation lies in exactly what it does: give the systems their marching orders (logical rules, bounds, and parameters to constrain the processing) and let them go do what they do best.
We are underestimating what we built; we can trust computers more than science-fiction writers tell us to. They can, indeed they do, provide the power to replace the most cumbersome tasks. I don’t think the humans are replaceable – the input I described above is always needed in analytics and processing – but I do think we are scared to test what we can do and what we can gain from releasing raw processing power into our daily lives.
I am biased by a vision of automated processing and decision making where humans dictate the logic for the decisions. I think, as I said above, that removing the biases we inherently carry from the equation would make for better decisions. Sure, setting the constraints and parameters for operation may be hard, and we may not be there yet. But instead of investing the time in seeing how we can make sense of processed data after it is processed (and potentially requiring several passes to get a good result) — wouldn’t we be better off if we just let the systems do what they do best and we focus on what we should be doing better?
a perfect symbiotic relationship…
Thanks for the great conversation!
Christophe is right on the money. I think the answer to this is a hybrid of better technology and better-trained Sensemakers, if only for one crucial factor: any time you have decisions in business, you also need accountability. Tipping the scales too far toward the technology suggests that, when decisions are made that don’t work for the business, your IT people and your technology vendors are to blame. I don’t think we’ll see a CEO get up at a shareholders’ meeting after a catastrophic year and say, “My technology vendor made me do it!” The more crucial a decision, the less appropriate it is to let the buck stop with a piece of technology.
That said, the many, many simple decisions a business must make on a daily basis – if mapped out properly – can be automated and thus simplify the facts a decision maker needs to have in order to get a handle on the vast amount of information he or she must base decisions on. The expertise to make decisions around sets of complicated facts, which may be unique to a market, industry or company, must still reside in someone’s brain and be guided by their better judgement.
I can sense that I am touching a nerve with complex transactions, but I don’t think it is because we believe systems cannot arrive at the right answer. I think it is because we are afraid that the complexity of the logical constraints we would have to provide the systems to make those decisions is beyond what we can produce.
Are there decisions where human bias and sentiment should play a role? Yes, definitely. But not in business. Biz is biz, not an emotional event. We need to be able to constrain complex problems with a specific set of logical parameters to allow the systems to make the decisions. We may not have the ability to do that yet, but that does not mean the systems cannot do it.
I do believe, very strongly, that removing human bias from business decisions will work better in the long run (with a few bumps at the beginning). And, no — it won’t turn into “Brazil” (the movie, not the country). The driving directives of the logical constraints shall (I almost wrote “will”, bad choice of word) not allow ‘evil’ to be the reason for the decision. If we can trust Google to not be evil, why not everyone else?
Thanks for the great conversation…
One of my favorite topics, and one that I actually pondered a lot way back in my R&D days (early ’90s) when I was working on ‘hybrid’ approaches to decision management, linking stuff like neural nets, rule-based systems, and good old-fashioned people power, and then again in the early 2000s when I helped Fair Isaac define the ‘Enterprise Decision Management’ or ‘EDM’ market. With this in mind I definitely am in Christophe’s camp.
An interesting twist if we agree that people are part of these ‘Hybrid’ systems – as Jody points out, is looking at the role of social networking and collaboration tools in this equation (there you go Esteban, brought us back to our regular topic!). If folks saw IBM’s Project Vulcan announcement back in Jan, this is one example of the general idea. See that announcement here:
Another effort which aims to “mashup” collaboration and other social flavors with workgroup BI is being created by Lyzasoft (disclaimer, one of my clients). I’ve been doing some solutions and market definition of this emerging space with them and you can download my initial take on how this may all come together here:
Click to access Evoke_CRM_-_Collaborative_BI_White_Paper.pdf
Looking forward to reading more comments – great stuff!
I am glad you see it my way 🙂
Seriously, I am glad you provided the link to the Lyzasoft WP; it actually is a step in the right direction, in my opinion.
Maybe I am guilty in this debate of not providing context for my comments (see, I am already failing at preceding the logical processing with sufficient parameters). Do I think this is a 2-3 year event? Absolutely not, we are talking at least 10-12 years, maybe even more.
So why, as people would rightly ask, are we having this debate? Because, as you describe in your WP and I have been saying, I am not about removing the human — I am about changing the order: having the human logic precede the processing and the finding of the answer. It takes more than 12 months to build these systems, more like 12 years, so these are decisions we must make now: what is the role of the human in the hybrid model of the future?
You had some interesting ideas in your WP, and so have most of the other people who commented. So why is it so hard to remove humans from decision making and simply let us tell the systems what the constraints are?
That is my advocacy point: let us become the logical constraints to the model, but not the model.
Thanks for a great comment!
Hi, interesting post and discussion, thanks. Sense-makers, pattern recognition, decision making, leading to action – you are talking about the most ancient communication technology invented by mankind – storytelling. As in those who can go “Once upon a time…” face-to-face. It is scalable by the way, if you are talking about technology. It is not scalable as an artistic community act.
If you are interested, peek into this post about masterful storytellers – most of us are not even aware of the market value of our phenomenal ability.
Even without knowing what kind of decisions and recommendations you are looking at, if talking about complexity – storytellers are the best sense-makers available.
The bigger issue in this case, the way I sense it, is trust. Will CEOs trust people more than they trust robots? Hard to tell, and it seems the robots are favored. Maybe because they are a dead end when it comes to conversation and blame…
Limor Shiponi’s last blog post: Avatar and the story in storytelling
I really appreciate the way you frame the problem, and the solution, and I agree that storytellers are an interesting twist.
However, I cannot see replacing one with the other. I do agree that storytellers are a great way to provide a simple explanation to a complex topic, and I do favor storytelling over most anything else in this world when it comes to having those conversations — the power of parable cannot be replaced with most anything else. However, I don’t think that storytelling replaces decision making.
I think your question is key: as we talk about business, will CEOs trust systems to make the decisions? I think the answer is: it depends. They would, if they can enter the logical constraints to bound the processing. That is my main argument here, and I still think it is what scares us in this dialog.
Now, if we can figure out a way to take the stories told, unravel the logic boundaries, and use those as input — now we are talking! That would be an ideal world… humans explain to the machines, via a story, what the decision entails, and the machine then makes the right, rational, logical decision based on those parameters. And humans can then use that decision to tell even better stories.
Maybe too far ahead…
Thanks for adding a great wrinkle to this conversation.
Interesting – I’m kind of “in the middle” here… while I believe we need much better tools and systems to help us, I also believe that AI will never (at least in my lifetime) challenge the human brain.
I think we all have to become sensemakers (at least all knowledge workers), and more emphasis will be on personal knowledge management and filtering, where we all digest different information, we all filter it personally, and then we share different parts with different people….
My 2 cents on information overload, filtering and pkm:
Very interesting post you linked to; thanks for that.
I am somewhat with you on the idea that AI will not replace humans, but I still would like to shoot for that as a goal. I have been reading about and playing with AI for some 20+ years (remember The Fifth Generation?) and I have seen steady progress, but very, very slow progress.
I think we are going to end up inundated with data very soon (someone said in an article I read recently that Google and Yahoo process upwards of tens of petabytes daily — petabytes!) and we need to speed up the process of using systems to help us manage the tsunami of data, information, and knowledge.
This is the goal of my writing, helping us see the problem of massive data inputs and no other way to process it than via massive computer power.
Otherwise, we will not be able to make the required decisions in real time, as we want to.
Thanks for the link and the read!
Comments are closed.