This conversation centers on the public's trust in Big Tech companies and on the outlook for public trust in new, advanced technologies. It was inspired by ChatGPT, the AI-powered chatbot recently released by OpenAI. With Facebook abusing customer data and other companies profiting from targeted advertising, the general public distrusts these firms. How will this sentiment influence the public reception of new and advanced technology like ChatGPT? And what does it imply about the future of Bitcoin and blockchain?
![](https://static.wixstatic.com/media/9167d1_a69ad68777df46af9dac96a90b9c543b~mv2.jpg/v1/fill/w_400,h_400,al_c,q_80,enc_avif,quality_auto/9167d1_a69ad68777df46af9dac96a90b9c543b~mv2.jpg)
"...when people look at a new piece of technology, they would conclude that it is bad for society, but it's good for themselves." - Justin Hendrix
Justin is the CEO and editor of Tech Policy Press, a startup nonprofit media and community venture that seeks to advance and influence the public discourse on the relationship between technology and democracy. He is also my instructor for INFO 5330 Technology, Media, and Democracy, a class that’s part of the Tech, Media, and Democracy initiative, where five New York City universities partner to defend independent media and journalism.
Me: Hi, Justin! Thanks for giving me this opportunity to interview you. Our class inspired me to research how much users trust new technologies when they're using them. Based on the graph below, we can see that social media companies have a relatively higher distrust rate than others like Apple, which is also a Big Tech giant. What are your thoughts on that?
![](https://static.wixstatic.com/media/9167d1_1dfbd3c3b01f41f0adf7dab640bb834c~mv2.png/v1/fill/w_980,h_657,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/9167d1_1dfbd3c3b01f41f0adf7dab640bb834c~mv2.png)
Justin: Yeah, I'm just looking at this graph, and there are so many different ways of looking at trust in corporations and the degree to which people trust or do not trust certain companies. This is a really good poll.
![](https://static.wixstatic.com/media/9167d1_3bcf113f3f1341c587785c26d8be39ec~mv2.jpg/v1/fill/w_980,h_551,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/9167d1_3bcf113f3f1341c587785c26d8be39ec~mv2.jpg)
There's another one by the Knight Foundation and Gallup, who have also done some interesting work on trust in media and technology, where they talk about a benchmark for measuring trust, bias, and media diets.
Me: Oh, interesting!
Justin: Their work is based on a very large 50,000-person survey and looks at many different questions about people's concerns with the tech firms. It's worth looking at as a kind of benchmark for why people in the United States are concerned about the role of the internet and technology platforms. When you look through some of these key questions, there are underlying ones like: do the companies themselves create more problems than they solve?
Do they contribute to challenges to democracy or social cohesion?
Do people have concerns that their particular political interests are essentially harmed by the way that these companies behave?
Are people concerned about the outsized power of the platforms, both economically and politically?
Are they concerned about the economic implications of tech firms, that is, the degree to which they fear they might lose their jobs?
Maybe some other aspect of their life becomes more difficult because of the way that tech is advancing.
Me: These are very important questions.
Justin: Thousands of sub-questions can be examined. To understand the question, you have to dig beneath the surface. But in general, I think the thing that most people tend to align on, across parties, perspectives, and demographics, is the idea that these firms have too much power. They simply have too much power in society, and there is little to hold them to account.
Me: I found that very interesting, and I agree that there aren't enough restrictions on these companies right now. If we look at the graph above, the hardware companies don't get as much of a bad rap as the social media giants. Companies like Apple have all of our facial recognition data and fingerprints, but they aren't being called out like the others. What do you think might be the reason behind that?
Justin: There are probably a few reasons. I do think these issues are complicated, and that people are, to some extent, aware of what is in the news and the regular discourse. For the last few years, companies like Facebook have drawn so much fire that it has distracted from broader questions about privacy.
![](https://static.wixstatic.com/media/9167d1_1dfbd3c3b01f41f0adf7dab640bb834c~mv2.png/v1/fill/w_980,h_657,al_c,q_90,usm_0.66_1.00_0.01,enc_avif,quality_auto/9167d1_1dfbd3c3b01f41f0adf7dab640bb834c~mv2.png)
This polling data shows this concern, yet the flow of data about the use of all manner of technology in non-social-media contexts, including to law enforcement or the government, is overlooked. Instead, the hottest topic is the set of content moderation questions. So, think about what has dominated the press recently: should politicians be allowed on a platform? Is this particular form of speech hate speech? Should these actors' actions on a platform be allowed or not? It's this type of argument around speech and content moderation. These questions have sucked up a lot of the oxygen that should have gone to core concerns around privacy. The privacy questions simply take a bit of a back seat.
![](https://static.wixstatic.com/media/9167d1_ee29ee52ee46495588ea9c7e8551e5cd~mv2.png/v1/fill/w_432,h_432,al_c,q_85,enc_avif,quality_auto/9167d1_ee29ee52ee46495588ea9c7e8551e5cd~mv2.png)
Me: Let’s shift gears a bit. I'm sure you have been following how popular ChatGPT has become. As we were just discussing, the public often doesn't trust tech companies, yet this tool has been really popular. That almost suggests the public does trust a tool like this. Why is that?
Justin: Well, there's a phenomenon in media and technology studies: when people look at a new piece of technology, they conclude that it is bad for society but good for themselves. That's true of a lot of things; people recognize the dangers or the potential downsides of many different technologies, but they see utility for themselves, so they continue to use them. So if you look at some of these polls where people are asked, is Facebook good for the world or good for America? No. However, if you dig into their views on whether it's good for them, the answer is yes. And I can't remember quite what that phrase is called…
Me: Technological determinism!
(Technological determinism is a reductionist theory that assumes that a society's technology progresses by following its internal logic of efficiency while determining the development of the social structure and cultural values.)
Justin: Right! So, people extrapolate to a social scale and recognize the danger. Yet at the individual scale, they recognize the utility. The same goes for ChatGPT. I mean, people are clearly concerned about it. In fact, I just did a round-up in Tech Policy Press a few weeks ago. Let me see if I can find it for you.
Me: Thank you! I will give it a read.
Justin: There is another poll from Monmouth (“Mon-muth”) University Poll…
Six in ten (60%) Americans have heard about this product and 72% believe there will be a time when entire news articles will be written by artificial intelligence.…only 1 in 10 (9%) Americans believe computer scientists’ ability to develop AI would do more good than harm to society.…(73%) Americans feel that machines with the ability to think for themselves would hurt jobs and the economy. Also, a majority (56%) say that artificially intelligent machines would hurt humans’ overall quality of life…
Me: Interesting…
Justin: Right? So I mean, there's a lot of concern out there. And yet again, it's one of these things: is ChatGPT interesting to play with? Does it do a good job of summarizing this transcript? Is it potentially very useful in my job? Does it provide me with useful answers to the questions I ask? Am I willing to learn to work around some of its limitations to get at its utility? The answer is yes. Right. So again, it's a problem of technological determinism.
Me: Yeah, it looks like the challenges of technological determinism extend in many directions. What about machine learning? I found it interesting that even some of our tech-savvy students at Cornell Tech sometimes refer to machine learning as magic; yet people still place a lot of trust in this mysterious concept.
Justin: I think it’s true that we as a culture accept a kind of technologically deterministic point of view: the idea that technology will advance, that we must advance it and figure out how to deal with the downside consequences, because this is ultimately the best way forward for the species. So even if there are systems we don't entirely understand, or the consequences of using a technology in society or the economy are unclear, we have to keep building. We could have a longer conversation about whether that's good or bad. But I think that is just a bedrock assumption. Most people aren't even aware of that assumption, let alone querying it. So it would never occur to them to ask the question: machine learning, should we or shouldn't we? Right?
Me: That is a good point.
![](https://static.wixstatic.com/media/9167d1_c7bf842b30bf4641962a11bdb5126580~mv2.webp/v1/fill/w_687,h_561,al_c,q_85,enc_avif,quality_auto/9167d1_c7bf842b30bf4641962a11bdb5126580~mv2.webp)
Justin: I think sometimes that's also baked into the words we use. I once heard someone say machine learning would be more accurately described as machine guessing. That's technically true. These large language models are not conversing with us. They're just prediction engines, right? They are just bullshit generators. They are just predicting the next word, based on a string of prompts. Obviously, that's a slight oversimplification of what's going on, but at the same time, it's not a sentient creature or anything with its own consciousness. It's really just a mechanism for predicting words. I feel like a lot of what is going on in this space right now stems from underlying assumptions that we do not query, and from patterns of talking and thinking about technology that are fundamentally wrong. And yet we do not query those thoughts either. We go straight to accepting the common parlance.
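To make the "prediction engine" idea concrete, here is a deliberately tiny sketch in Python. It is a toy bigram model; everything in it, from the corpus to the function name, is invented for illustration, and real large language models use neural networks trained on vast corpora rather than simple counts. But the core move is the same: predict the next word from the words that came before.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: for each word, tally which words follow it.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat": it follows "the" most often here
```

The model never "knows" anything about cats or mats; it only counts which word tends to come next, which is the sense in which "machine guessing" is an apt description.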
Me: There is definitely so much to think about there. There is something I'm personally more worried about: does having advanced technology make us lazier? People are less likely to think when they're used to having things handed to them.
Justin: I don't know. I mean, this is a really hard question to answer. I just got off the phone with somebody saying, “look at the power of these language models”. Think about it like this: imagine a poor country with very poor health care - too few doctors, not enough nurses, and no ability to build facilities or distribute information to people efficiently. There, the ability of a citizen to ask a medical question of a system powered by language models and get a response over text is incredible. Going from no information to some information in a scenario like this could be very powerful and very useful. That's an incredible opportunity and something that can truly change the lives of millions or even billions of people. So, in this case, you wouldn't think of that in terms of people being lazy, right?
Me: Yes, that’s very different from people sitting at home scrolling through TikTok.
![](https://static.wixstatic.com/media/9167d1_bf314feddd864c7aaf6a3d6ba16ee615~mv2.png/v1/fill/w_850,h_826,al_c,q_90,enc_avif,quality_auto/9167d1_bf314feddd864c7aaf6a3d6ba16ee615~mv2.png)
Justin: Right. Right now, some people just don’t have that access to information. On the other hand, consider a rich-world scenario, where people have a lot more resources and abundance. Will people stop learning certain fundamentals because the machine does them all the time? I don't know. I suppose we can make those arguments, but I fear that’s where the argument starts to enter a moral panic phase. People have always said that about technologies: this new cotton loom is going to make people lazy because they won't have to weave themselves. Or similarly, the masses will get used to all this manufactured food and just get fat and obese. Things might get out of control. Does that happen? Yes. On the other hand, have we improved the situation for many others? Yes. I don't know if there is an answer to this question. These are complicated things.
Me: We will end here on this profound and complicated question. Thank you so much for your time!
Justin: Thank you!