
Podcast: Search 2.0 – From Search to Discover by Yezdi Lashkari May 1, 2008

Posted by John in Furrier Podcasts, Technology.

Yezdi Lashkari outlines the origins and limitations of collaborative filtering, the importance of Web 2.0, and how the commoditization of certain specific web technologies will benefit both consumers and businesses alike. He addresses the importance of blending algorithms to effectively harness collective user behavior, and the wisdom of crowds.

Yezdi Lashkari was a co-founder of Firefly Networks (acquired by Microsoft), a pioneering company in the area of collaborative filtering and personalization. Lashkari recently left Microsoft, where he held a number of senior product leadership roles, the last a special assignment sponsored directly by CEO Steven Ballmer, focused on researching large-scale network-centric computing infrastructures for thousands of hosts. This work is now driving one of the technical pillars of the post-Vista Windows release. Lashkari holds numerous patents in collaborative filtering, data protection, and user profiling technologies. He received his M.S. from the MIT Media Laboratory and has three computer science degrees covering research areas ranging from artificial intelligence and databases to collaborative filtering and personalization.

Enjoy the podcast sponsored by Aggregate Knowledge – Leader in Web 2.0 Discovery Technology

Yezdi and I talk about the big trend in Search or Search 2.0 – and it has nothing to do with search as we know it today.

The full transcript from the interview is here.

Podcast: Discovery Series – Recommendations 2.0 by John Riedl, Ph.D February 20, 2008

Posted by John in Technology.

In this podcast I spoke with John Riedl, author of Word of Mouse: The Marketing Power of Collaborative Filtering. He discusses the evolution of recommendation systems and lessons learned from his experience at Net Perceptions. He outlines how the technology has evolved from a difficult and expensive system to deploy into something simple and effective. Now virtually any online business can have an efficient, low-risk way to integrate discovery into its marketing and merchandising decision-making. Riedl explains why just employing a keyword search on a site isn’t enough in the Web 2.0 world.

 

Full transcript here:

John Riedl was co-founder and Chief Scientist for Net Perceptions, an early leader in online personalization technology. Riedl is currently a professor in the computer science department at the University of Minnesota where his research includes the GroupLens Project one of the most famous collaborative filtering and recommendation research groups in the world. In 1999, Riedl and other Net Perceptions co-founders shared the MIT Sloan School’s award for E-Commerce Technology.
We’re here with John Riedl, professor at The University of Minnesota, Department of Computer Science. John, welcome to the podcast on the Discovery Series. Tell us about who you are, what you’re doing at the university, and some of the projects that you’re working on.

Sure; I’m a professor in the computer science department and I work on the GroupLens project, which is a sort of overarching program of research that explores the applications of computer technology, broadly, to help people find the information, products, and services that they’re most interested in, especially on the Internet.
What are some of the things that you’re seeing that you’re focusing your research on, some of the big trends?

Well, the biggest thing I see in the Web 2.0 age is that it’s all about letting people contribute content to the Web and then letting other people comment on, discuss, rate, and review that content, so that the stuff we all see is really put together by other people like us. One way of thinking about it: I see Web 2.0 as a democratization of the editorial process, in a sense.

The original web let anybody who wanted to write things do so. The problem was that there were still a very limited number of people who got to choose which of the things that were written or produced (for instance, as an audio podcast) were viewed by other people. In the 2.0 era, we use technologies like rating, reviewing, and tagging to help other people find the stuff that they’ll be most interested in from all of the things that are available on the Web.

You’re one of the founders of GroupLens, you mentioned, perhaps one of the most famous collaborative filtering and recommendation research groups in the world. Tell me how the group got started.

Well, sure, I can remember exactly because it was a very exciting moment for me. In 1992, I was sitting with a friend, Paul Resnick, in a talk at the Computer Supported Cooperative Work conference, and as we were sitting listening to that talk, we realized that the researcher was envisioning a world in which the most important things that everybody would produce and consume for the economy of the world were going to be information items.

And it was really kind of a cool vision, but there were two things missing from the vision that he described. The first one is where we were going to actually get enough food to eat in this world where all anyone ever did was exchange information. I don’t know a solution to that one, so I won’t address it further.

But the second one is that we realized that if this was really the way the world was going, there was going to be a terrible challenge for people to pick out, of all this available information, the stuff that they individually were most interested in. We saw that the technology he imagined, that he was hoping would solve that problem, was basically an artificial intelligence agent that would read the newspaper for you every morning and then clip out the articles that you would be most interested in.

Well, we realized that that technology was not going to be ready nearly in time to enable this new world of information exchange. And so we started thinking: well, what technologies could make it possible? And we came up with the idea that one really powerful concept would be to use computers to aggregate the ideas, the thoughts, the values, the evaluations of humans. Rather than try to use computers to directly make these decisions, we would use computers to leverage human decisions in helping people make the information choices that they wanted.

There’s a cute little story from artificial intelligence that I think really puts that in perspective. Some people say that artificial intelligence is the idea of having computers do badly what humans do well, and what we wanted to do was turn that on its head. Paul and I wanted to say: well, what if we let humans make these value judgments that we humans are so good at, and we just use computers to do the statistical analysis and aggregation of all of those opinions, to then add value to other users? In a sense we were having computers do well what computers do well, and humans do well what humans do well.
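The aggregation Riedl describes can be sketched as a minimal user-based collaborative filter: humans supply the ratings, the computer only does the statistics. This is an illustrative toy, not GroupLens’s or Net Perceptions’ actual algorithm, and the users, items, and ratings are all made up:

```python
# Minimal user-based collaborative filtering sketch (illustrative data only).
# Predict how much a target user would like an unseen item by weighting
# other users' ratings by their similarity to the target user.

from math import sqrt

ratings = {
    "alice": {"matrix": 5, "memento": 4, "titanic": 1},
    "bob":   {"matrix": 4, "memento": 5, "titanic": 2},
    "carol": {"matrix": 1, "titanic": 5, "notebook": 4},
}

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[a]) & set(ratings[b])
    if not common:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    na = sqrt(sum(ratings[a][i] ** 2 for i in common))
    nb = sqrt(sum(ratings[b][i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other in ratings:
        if other != user and item in ratings[other]:
            w = similarity(user, other)
            num += w * ratings[other][item]
            den += w
    return num / den if den else None

print(predict("alice", "notebook"))
```

The computer never needs to understand what a movie is about; it only leverages the value judgments people have already made.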

Talk about how you see the AI software environment evolving. I mean, is that where software ultimately is going to get to?

Well, you know, over the long-term I have a very open mind about where AI is going and I just finished reading Ray Kurzweil’s book, The Singularity Is Near about his vision of a world where the AIs are all smarter than we humans, and I’ll tell you, I find that vision compelling; I think that really, human brains are limited by their biology and by the processes of evolution and ultimately we’re going to get brains and silicon that are stronger, more powerful, bigger, better, faster than human brains.

The one question I have is when that ultimately is going to happen. And I’ll say that in my view there’s this terrible danger, which you can see throughout the history of AI, of always expecting it to be 10 years away.

For as long as I can remember in my 20 years as a computer scientist, people have always been saying that in just 10 years we’re going to have that, or that in just 20 years we’re going to have computers smarter than humans, and as far as I can tell, we’re not really making rapid progress toward that goal.

But there is another goal, in some ways technically less ambitious, but I think equally ambitious in terms of social impact, which is to build computers, or computer programs, that are going to amplify the abilities of humans.

So, for instance, when people come to Amazon nowadays and they get all those cool recommendations for stuff to buy, they are getting exposed to a computer experience that is very much amplified over what they could do individually. But the way it’s amplified is Amazon is collecting lots of information from people all over the world about what products they like to buy and is leveraging that information with some very clever computer algorithms to make suggestions about things that you might want to buy. That’s a great example of a computer program that amplifies human abilities.

It’s taking what we’ve got now, which is computers that can deal with terabytes of data reasonably rapidly and can present us user interfaces that we can understand and take advantage of, but that certainly don’t have the ability to make human value judgments or achieve human understanding of documents, pictures, audio, or video.

And yet humans are great at that stuff; we find it really easy to look at a movie and say whether we like it or not. So collaborative filtering is the idea of taking all of that information and using some algorithm that some people would call an artificial intelligence algorithm and some people would not. You know, there’s this other danger in artificial intelligence: the field has been around for many decades now and has made some tremendous advances in human understanding, but every time we understand something, some people say, well, that can’t be artificial intelligence, that’s just a computer program.

And so I think there’s this real danger of saying that anything we understand is not anything important and I reject that. I think that these contributions are – from artificial intelligence as a discipline – enormously valuable; they’re just wonderful. And we should just accept them for what they are, which are great ways of making humans even more effective at the things that we try to do.
You’ve been involved with recommendations as CTO of Net Perceptions, one of the first recommendation companies. What are the issues around some of these new navigation techniques?

Well, the thing that I think is really cool is that recommenders have gone beyond a technology that, when we founded Net Perceptions, we thought was mostly going to help people find information that they would find valuable. Frankly, that was a failure for our company. We would go on a sales call and we’d say, “Hey, I’ve got a technology that can double the number of times that a user will come back to your site because they’re just going to love the information they find.” And the guys on the site would say, “I absolutely believe you can do that, but I can’t afford to double the volume of traffic on my site, because all it’ll help me do is lose money faster. Because I’m not making any money yet from those page views.”

And now we’re in an era of the Internet where people have finally figured out how to monetize page views, I think led by Google; I mean, Google has just been enormously successful at that.

Oh, by the way, to put in a plug for a different way of seeing Google, remember that Google is a company that has been one of the leaders of this idea of leveraging human abilities through computer algorithms. I mean, PageRank, the heart of the Google search algorithm, is fundamentally a relatively simple algorithm applied over all of the decisions of the millions of people who’ve contributed to making up the Web. Right? What Google looks for is which pages have the most links to them.

Well, that’s, in a sense, an expression by those people of their confidence in the site that they linked to. And that’s why PageRank is such a wonderful algorithm.
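Riedl’s summary, a simple computation applied over millions of human linking decisions, is essentially the power-iteration form of PageRank. Here is a toy sketch on a three-page graph; this illustrates the published idea, not Google’s production implementation:

```python
# Toy power-iteration PageRank over a tiny link graph (illustrative only).
# Each link from page A to page B is treated as A's vote of confidence in B.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new = {}
        for p in pages:
            # Rank flowing into p: each linking page q splits its rank
            # evenly among the pages it links to.
            incoming = sum(
                rank[q] / len(links[q])
                for q in pages
                if p in links[q]
            )
            new[p] = (1 - damping) / n + damping * incoming
        rank = new
    return rank

links = {
    "a": ["b", "c"],   # page a links to b and c
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # the most heavily linked page wins
```

Page "c", which collects a full vote from "b" plus half a vote from "a", ends up with the highest rank: the algorithm is just bookkeeping over human endorsements.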

Well, they extrapolated human behavior and put it into an algorithm that could scale and provide in essence a metric for users.

Exactly, and it’s just an explosive breakthrough; I mean, it has really changed the way the whole world works. You think about the kind of information that you can just sit down and type into a Google search and, boom, you get an answer to something where 20 years ago it would have been literally hours in the library. When we look at the types of productivity improvements that we’re seeing in information workers, I think one of the reasons is because of their amplification by these early AI technologies that have now been applied to the masses of data that are available on the Internet.
You recently became an advisor to Aggregate Knowledge. What about their approach was interesting to you?

Well, there are two things that Aggregate Knowledge has done that I think are really exciting. One of them is that they’ve taken the basic recommender technology and built sort of a 2.0 version of it. They use all of the state-of-the-art stuff, get it right, and build it in such a way that any website in the world that wants to explore how to apply this technology in new ways just has to take its access to the Aggregate Knowledge engine and add a small piece of Web 2.0 JavaScript to its site. That enables new ways of viewing the knowledge and the information, the products, the articles on their sites that are absolutely exciting and terrific.

I think the thing that’s going on there is that Aggregate Knowledge is agnostic about how people use the recommendations; they see themselves as providing a state of the art, world class recommendation engine and then providing very simple APIs to let their customers leverage those engines in any way that they want.

Users want navigation but they also want good search too. They’re not mutually exclusive but they are separate in theory. So how does social networking and collaborative filtering fit into this in their approach that’s different?

Yeah, that’s a fascinating question. I mean, how do search and browse relate? I would say that in general we’re in an era in which search has just dominated. Google has just turned search into the dominant paradigm in the information world. I do think that we’re going to see a rebalancing of that over time. I don’t know that I’m right about that; that’s speculation, but I think one of the things–whether or not I’m right about the rebalancing towards browse or not–one of the things that I’m certain of is that we’re going to see increasingly that search that is only information-based, that’s only based on things like the keywords that are in the documents, is going to be a failure in the Web 2.0 world.

And the reason for that is that people need searches that also have some concept of the quality, or fit, of the documents that are returned. And that’s the cool thing about the Aggregate Knowledge technology: you can implement that in a website where somebody can do a search and get back a set of results. Then, by just plugging this little JavaScript widget into the result list, the website can get a set of scores presented to the user in innovative interfaces that let users look at search results not just by which results were relevant to the query but by which results are personally relevant to them as an individual user of the website.

But at the end of the day, for the people who have these big sites who want to take advantage of these kind of technologies, it’s a deployment issue. I mean didn’t that cause a lot of the early firms to kind of not scale?

Yeah, that’s exactly right. I mean, one of the challenges at Net Perceptions in the early days was that we could go out to a company and make the case that we could literally make them millions of dollars with our recommendations over the technologies that they were using in-house. And yet we still couldn’t convince them in some cases to do a deployment, because it would take months of work from their IT department. And the thing that’s really exciting – you know, Web 2.0 has sort of two characters to it, and one of the characters we’ve talked about already: the socialization of the Web.

And the other character is this deep use of simple technologies – JavaScript and Ajax – to make it really easy to build interfaces that are much more interactive than the original Web 1.0 interfaces. And that’s the thing that I think is very exciting about the recommendation 2.0 world: you’ve gotten to a place where, in order to put recommendations into a website, all you have to do is build a dictionary of the products that you have in your catalog, export that dictionary, and then build a simple JavaScript widget into the pages where you want the recommendations to appear, and voilà, you’re done; you’ve got everything you needed.
It pushes the complexity into the cloud, or into the network, where you put the big iron and the clusters of servers. Google is doing this now with its big, massive computing cloud, and Amazon has its utility model – S3, EC2, and now payments – so the trend is that the heavy lifting gets done on the network, not on the sites –

Exactly. It’s like in the early days, if you wanted to run an Internet business you had to figure out how you were going to have a dozen servers all around the country, and then Akamai showed up and they just commoditized that problem. All you had to do was pop up with your server and your service and sign up with Akamai, and off you went. That’s what Aggregate Knowledge is trying to do now for recommendations. They’re saying: hey, you want to do recommendations? Great, that is our competency; why don’t you outsource that and put your effort into building the best products and services and user experience that you possibly can.
So final question: what are some of the current projects that you’re working on at GroupLens that have you most excited?

Well, one of the things I’m really interested in is some explorations that one of my Ph.D. students, Shilad Sen, has been leading, where he’s been exploring the impacts of tagging on how users behave on a website. And one of the things he’s looking at particularly is how recommenders interact with tagging systems.

So the question is: if you have a typical user tagging system, where people just start applying tags to the site kind of randomly, then over time you’ll see the vocabulary that’s being used on the site diverge and become less and less relevant to the users of the site.

Well, one of the ideas is that maybe a recommender could detect that; it could sort of watch the vocabulary as it’s emerging, and it could make sure that the tagging system itself applied some pressure on tagging users to encourage them to use the tags that were most valuable for those users.

So is it building taxonomy?

Exactly. What it’s doing is watching users’ tags, trying to understand the emerging taxonomy of the tags, and then sort of encouraging other users, through the recommendations that it makes, to fit their tags into that taxonomy. Now, it never requires it – like all tagging systems, any user can apply any tag they want. I remember I saw a tag a little while ago, my favorite tag ever: it was “great music to listen to while drinking tequilas and driving in a convertible to Mexico.”

What a tag! I mean how much more precise do you get? So you could always add that tag if you want, but what the recommender does is it sort of encourages you to use tags that are a better fit for the community.
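The kind of gentle pressure described above could take the form of suggesting popular existing community tags as a user starts typing. This is a hypothetical simplification of the GroupLens work, with made-up tag data:

```python
# Hypothetical sketch: nudge taggers toward the community's emerging
# vocabulary by suggesting the most-used existing tags that match a
# user's partial input. Users can still apply any tag they like.

from collections import Counter

def suggest_tags(partial, applied_tags, limit=3):
    """Rank community tags containing the partial string by popularity."""
    counts = Counter(applied_tags)
    matches = [t for t in counts if partial.lower() in t.lower()]
    return sorted(matches, key=lambda t: -counts[t])[:limit]

# Every tag application on the site, in order (illustrative data).
applied = ["sci-fi", "sci-fi", "sci-fi", "science fiction",
           "classic sci-fi", "comedy", "comedy", "drama"]

print(suggest_tags("sci", applied))
# → ['sci-fi', 'science fiction', 'classic sci-fi']
```

Because "sci-fi" is the community’s dominant spelling, it surfaces first, which over time pulls the vocabulary toward convergence without forbidding anything.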
So what it really is doing, in essence, is setting up a linguistic architecture so that when people come in, they can be more specific, with diverse tag sets out there clustered around the content.

Exactly.
That’s amazing. And then that scales with the machine learning; essentially it’s the machine-learning environment, right?

Exactly, it scales with the people and with the machine learning algorithms that you use.

Wow, that’s exciting. Is anything coming onto the network that we can play with at all? Or for people out there who are interested?

That’s right, yeah; we’d love to have you come visit our research site. This is totally not-for-profit, just a National Science Foundation-supported research site; it’s called http://www.movielens.org and please come. You can see our tagging features, you can see a bunch of other community features that we’re exploring, and if any of your listeners would like to see the published papers that underlie our work, they’re all available on our website at http://www.grouplens.org.
John thanks so much for the chat and I’ve always loved talking about computer science and some of the innovations out there and Web 2.0, modern web and recommendations and group theory and group research and group algorithms – great stuff. Thanks so much for taking the time.

Well, thank you. I couldn’t agree more. It’s just a wonderful time to be alive.

Discovery Series Podcast: The Paradox of Choice – Barry Schwartz February 14, 2008

Posted by John in Furrier Podcasts.

I had a chance to interview Barry Schwartz who is a guru on strategies of discovery. This is the first of a series of Podcasts on Search and Discovery. I enjoyed the conversation. Thanks to Aggregate Knowledge for introducing me to Barry.

Barry Schwartz, award-winning author of The Paradox of Choice: Why More Is Less, explains how the increasing demand for options actually decreases consumer confidence and satisfaction: people can be overwhelmed by too many choices. In this podcast, Schwartz discusses his research and how it applies in the online world. He gives practical advice for retailers on how they can delight their customers and be successful online.

Barry Schwartz is the Dorwin Cartwright Professor of Social Theory and Social Action at Swarthmore College. He is the author of several books, including The Battle for Human Nature: The Science, Morality and Modern Life and The Costs of Living: How Market Freedom Erodes the Best Things in Life. His articles have appeared in many of the leading journals in his field including the American Psychologist.

Full Transcript Here:

We’re here with Barry Schwartz, the author of The Paradox of Choice. Barry, welcome to the podcast. Tell us a little bit about yourself and your background, your academic background, and your book, Paradox of Choice.

I got my Ph.D. in psychology from the University of Pennsylvania in 1971, and I have been teaching at Swarthmore College ever since. For 20 years, I have been interested in how well the assumptions that economists make about human nature actually fit with what we know about human nature. The answer to that, by the way, is: not very well.

In the last few years I got interested in a specific [assumption], which is the idea that the more choice people have the better off they are. [This idea] is deeply embedded in economic theory, and there’s evidence that’s accumulated in the last few years that that assumption is also false… that you can give people too many choices and it screws them up in various ways. My book, The Paradox of Choice, is a summary of that evidence and an argument about why people can be overwhelmed with too many options.
Talk about the paradox of choice in an offline world, since that’s mainly where your research and examples have been focused. Give us an example of some of the things that happen quite frequently.

Well, in an offline world, you’ve got 175 salad dressings in the supermarket and 250 kinds of cereal. You go to Circuit City and you can construct six million different stereo systems out of the components on hand. You look in the newspaper and there are 10,000 mutual funds – stocks – to choose from. There are no fewer than 30 kinds of dental floss. There’s simply no area of life where people don’t have an extraordinarily large number of options in the offline world.

And, again, the assumption was that adding options will make people better off. If you’re alternating between Corn Flakes and Rice Krispies and I introduce Sugar Frosted Flakes, if you don’t care, you can ignore it, and if you do care I’ve just made your life better. That’s been the operative assumption. So in the offline world, you can’t walk into a store without experiencing this.
What does your research show in terms of the consumer?

Well, there are three different effects that having too much choice produces. I should emphasize, most of this work has been done in labs, not in real retail environments, although some work has been done in real life environments. The first effect is with all these options to choose from, people end up choosing none. They simply pass.

So if you’re a retailer, what that means is that you actually sell less. People just don’t know how to choose; they throw their hands up in dismay and walk out empty-handed. That’s one problem.

The second effect is that if people overcome this indecision and paralysis and choose, they may choose badly. And here’s what I mean. If you’re trying to buy a stereo system, let’s say, there are certain things about it that are easy to evaluate and other things that are hard to evaluate. Sound quality, fidelity, loudness – those are more difficult than, say, physical appearance. If you have a lot of options to choose from, it’s just not possible to do the difficult evaluation of each of the features that matter. So what you’re likely to do is simplify the task and choose on the basis of what’s easy to evaluate, even if that’s not what’s most important.

So you might choose by brand and price and ignore all those wonderful subtle features that producers of these goods take so much trouble to create. Now if it happens that all you care about is the simple things like brand and price, you’re not harmed by this simplifying but if you actually care about other things, you’ll end up making a less than optimal choice.

So first is paralysis, the second is worse decisions.

The third thing, which is in some ways the most surprising, is if you overcome paralysis and manage to choose, and you manage to choose well, you’ll be less satisfied with what you’ve chosen if you’ve chosen from a large set than if you’ve chosen from a small one.

So even though you do well, you end up disappointed and of course you’re likely to blame that disappointment on the thing you’ve chosen and not on your own psychological processes, and the reason for this is that it’s very unlikely that there’s one stereo or one cereal or one place to go on vacation or one restaurant that is in every respect the best. So you have to make trade-offs. And if you’ve considered a lot of options and you say “No” to most of them, you’re saying “No” to a lot of really attractive features of the other options.

And even if you make the right choice, you end up thinking about all those other wonderful things you’ve passed up, and that makes your choice less satisfying.

So in the marketing world, they talk about cognitive dissonance. In a way, more choice creates more of that.

It does, except that if cognitive dissonance worked, upon making your choice to go to the Bahamas for example, you would immediately forget about all the alternatives or convince yourself that the alternatives really sucked and there really was only one option. That’s what we’re supposedly doing all the time to reduce the dissonance that we experience when we have a tough, conflicted decision. But I believe that doesn’t work when there are lots of options to choose from.

Or if it does work, it works very temporarily. I mean, in the long run you are sitting on the beach in the Bahamas thinking about how great it would be in the Rockies.
How have you applied some of the things that you’ve discovered on the offline side? Can you share some of your thoughts on the online choice issue?

Absolutely. Now I haven’t done any research in online settings. I would love to be able to; still hoping to be able to find the right partner. In theory, I think online is both better and worse than offline.

It’s worse because the number of options available for you to consider is essentially infinite. Everything is there and the next source of alternatives is just one mouse click away.

So you could spend your whole life looking for a stereo if you were of a mind to do that. That’s what’s bad about it. You know, if you’re shopping retail, eventually you just say, I’m tired of getting into my car and driving across town; I’m just going to pick one from this store, I’ve been to five already. But it doesn’t take any work to go to five, 10, 20, or 50 websites. So that’s why it’s worse.

The respect in which it’s better is that the online retailer has the tools to very much customize what the individual visitor gets to experience. So even if you’ve got thousands of stereo systems available, you don’t necessarily have to show them all to each customer. If you ask the right questions in advance, you can turn your super department store into a boutique. I get to see five, you get to see five, and they’re not the same five. And so we are choosing from a small set, not a large set, thanks to the clever way in which you’ve organized your site.

So I think that we need retailers to think of themselves much more as agents or curators rather than simply providers if they want their customers not to be paralyzed.

So group one sees, say, 20 suggestions on a website. And group two sees five suggestions….

Right – well, our prediction is straightforward. If you show me 20 things, I’m less likely to choose any than if you show me five… For pretty much any category. No one has done that experiment to my knowledge, but that’s what I would expect to happen.
Some of the things that Aggregate Knowledge is doing – such as collecting user behavior in an aggregate way, so that the human collective becomes a filter – is that something that can solve the paradox?

I think it can; it really depends. From my understanding, for example, of how Netflix works – you get DVD recommendations based on what you’ve chosen in the past; how much you’ve liked what you’ve chosen; and how much other people who have chosen the same things liked these recommendations. So [Netflix] can give you a list that is as long as the equator or they can give you the ten things that, based on your past behavior, you’re likely to find satisfying.

I think that’s a terrific way to solve the choice problem, and if you’re the kind of person who doesn’t want to be told what you’re supposed to like, well then, all 80 million titles are available for you to inspect. Not everybody will think this is the Holy Grail right now: coming up with the best system, the best algorithm, for making individualized recommendations to people. The knowledge problem has been solved, mostly (thanks to Google), and now people realize that once you make every piece of the world’s knowledge available you’ve created a new problem, which is making it usable. And that means managing it, organizing it, editing it, filtering it, and so on.

And there is not yet a Google equivalent, at least in the retail space that I’m aware of.
So are you planning any tests with Aggregate Knowledge to test the paradox at all?

I’m hoping to. I mean, the ideal form of a test is to go to some highly trafficked website and create alternative versions that, for example, either vary how many options people see or vary the way in which those options are organized. You can organize them hierarchically so that it doesn’t look like such a long list. And for a day, or a few hours, depending on how much traffic the site gets, show random people one or another version, and then measure the things that are so easy to measure online: how much time people spend and what the conversion rate is. And you can ask them how satisfied they are with the site, whether they would recommend this site – you could even ask them further downstream how satisfied they are with whatever they bought.

So that would be my ideal. Now most of the things I’d want to measure, I suspect, are already being measured by these various sites.
Yeah, the collective, anonymous group model, which basically drives discovery around consumer behavior – identifying patterns in anonymous data – is less about profiling and more about the “wisdom of crowds.”

Well, it can be anonymous. I have other interests because I think the choice problem is worse for some people than it is for others. It would be nice, for example, to give people the scale that I’ve developed that distinguishes people who are out for the best, I call them “maximizers,” from people who are just out for good enough. My prediction is that maximizers will be especially plagued by large choice sets. So then people wouldn’t be quite anonymous because you’d give them a little survey before they even started, and then sort them on that basis.
In your book, Paradox of Choice, you talk about “satisficers and maximizers.” Talk about the difference between the two types of people.

A “Maximizer” wants “the best” – the best vacation place, the best restaurant, the best stereo, the best job. And in order to find the best, you need to evaluate every possibility; otherwise, how do you know it’s the best? Of course, the whole point is that with a trillion choices out there, examining every possibility is simply not possible. It produces a sort of paralysis of indecision, exhaustion, and a huge amount of time spent choosing. Then you finally pull the trigger and pick something, and there are still dozens of things you haven’t considered, so you imagine you could have done better if you’d looked a little longer.

A “Satisficer,” by contrast, is only looking for something that is good enough. And your standards could be low or they could be high. You can be very discerning or not. The point is you know what you’re looking for and as soon as you find something that meets your standards, you just choose it and you don’t worry about what else is out there. So you don’t need to do an exhaustive search. That means a large number of alternatives may not be nearly as much of a problem for you.

And what we find is that people differ in this regard. People who score high on this maximizing scale take more time to choose, look around at what other people are choosing, and are, in general, less satisfied with what they choose, and less satisfied with their lives than people who score low on this maximizing scale.
In the online world I think a lot of retailers get caught up on metrics like time on site. A Maximizer would spend more time on a site – a Satisficer wants to get out of there as fast as possible.

Well, they might want to get out of there as fast as possible. You know, if you’re a Maximizer and you’re looking to buy a $25 toaster, you figure this will take five minutes online, and you look up four hours later…

On the one hand, you know, it’s not worth four hours to choose a $25 toaster – on the other hand, damnit, you want the best toaster.
It’s a conflict. Do you think you’ll be able, in an online setting, to tell the difference between a Satisficer and a Maximizer?

Well, it’s easy enough to tell the difference just by giving them the questionnaire – it’s just a 10-12 item survey, and there’s no reason to think people would be any less serious about it online than they are when they’re sitting in a room.

The question is how does their score on this measure correlate with other things that are of interest like: how much time they spend on the site, how much and how often they buy, how satisfied they are with what they buy, and so on.
But many people in the tech business talk about algorithms and data mining. How important are the human factors in making good suggestions like product placement or number of items?

Well, I think both things matter. If you don’t have the wherewithal to assess case by case – to individualize and tailor your site to each customer – then you design your site around what produces satisfaction and conversions in the aggregate (behavioral data). But it ought to be possible to take advantage of what you call the human factors and give me the right site for me and give you the right site for you. There’s no doubt that people differ from one another substantially, and while you can’t rearrange the shelves of a brick-and-mortar store for each customer who walks in, you ought to be able to rearrange the shelves of your virtual store.

What I’m saying is that if retailers took the time and trouble to do that and they did it well, they would not only sell more but they’d have much more satisfied users.
You mentioned that you’re looking to work with folks online like Aggregate Knowledge to test some of your theories. What are some of the tools that you’re looking for?

Well, there are various things that have been studied as psychological characteristics that all ought to have an impact on how people use sites and how satisfied they are with their use. One of them is this scale that I developed with my colleagues, “Maximizing.” Another one is a scale that measures what’s called “tolerance for ambiguity.” Some people just can’t handle conflict, and of course the more information you give them, the more conflicted they are, so you might expect if you gave people that scale it would predict how they use a site and how satisfied they are with it.

There’s another scale that distinguishes people who are approach-oriented – they’re looking for something good – from people who are avoidance-oriented – they’re trying to prevent something bad. You’re either a pessimist whose main mission is to prevent a catastrophe, or an optimist whose main mission is to get something satisfying. That’s a dimension on which people differ, and it might also have an impact on how people use these web spaces.

So I’d love to – again, in an ideal world – get profiles of some of these users by giving them some of these scales and then track their behavior and see which of these psychological dimensions can predict the things that the retailers care about. Of course the retailers would be interested in this because it might help them design their sites more effectively. I’m interested in it because it’s a way of connecting these psychological variables to real decisions that people are making.
Right now sites rely heavily on the search bar as a crutch. Aggregate Knowledge talks about this concept of discovery – finding that unexpected result versus typing in a keyword. How is discovery related to The Paradox of Choice?

I think Aggregate Knowledge is right about the kick people get out of discovery. I’m blocking on the name – it’s a very large big-box store based in Seattle, mostly groceries. People leaving that store are more satisfied than people leaving pretty much any other traditional retail store, and there seem to be three reasons. One is that the prices are good – not quite as good as Wal-Mart, but very good. The second is that the selection is limited, unlike other places: there aren’t 100 kinds of toilet paper, just a few. And the third is that they always have surprises available. Even though it’s kind of a bargain store, you may walk in one day and see that they’re offering expensive smoked salmon at a very reasonable price.

They get a deal, they buy it, they offer it to customers, a week later it may not be there. And so people do their regular shopping getting the stuff they know they need and also expecting that they will encounter a few things that are going to be pleasant surprises, little treats, little luxuries. And people walk out of there with a smile on their face.

And it’s very rare for people to walk out of shopping experiences with smiles on their faces these days.
Well, that’s what Aggregate Knowledge is talking about – the idea that here’s a surprise for you, that we’ve done the work for you.

That’s exactly right and I think that if you could pull that off, you’d win real loyal customers and the more complicated the choice environment is, I suspect, the more satisfying it is to people to have these pleasant surprises.

So final question on that note, five years from now on the online side, knowing what you know with The Paradox of Choice, what is the environment going to look like?

My best guess as to what it’s going to look like is a lot of boutiques, on both the retail side and even the information side. People are not going to want sites where they can get everything; they’re going to want sites where they can get just the things they’re looking for.

And the trick will be to figure out how to organize what you have, what you know, and what you make available so that for each and every person who comes, it’s a little mom-and-pop store – and just the right mom-and-pop store. I think that if something like that doesn’t happen, then it’s just going to implode. People are going to throw up their hands and conclude that it’s just not worth the work: “I’ll just call my friend and ask her.” So I think we need sites that act as if they are our friends – sites that know us well enough to make the choice set appropriate, relevant, and manageable.
Barry Schwartz is the author of The Paradox of Choice: Why More Is Less. Thank you so much, and good luck with your new book.
