Social Media in the Future: Under the Sign of Unicorn...
 
1. Zakład Teorii Kultury, Instytut Filozofii i Socjologii PAN, Poland
2. Wydział Nauk Społecznych, Katolicki Uniwersytet Lubelski, Poland
 
Submission date: 2022-02-15
Final revision date: 2022-04-08
Acceptance date: 2022-04-10
Publication date: 2022-06-30

Corresponding author
Urszula Jarecka
Zakład Teorii Kultury, Instytut Filozofii i Socjologii PAN, Nowy Świat 72, 00-330 Warszawa, Poland
 
 
Studia Humanistyczne AGH 2022;21(2):61-79
 
ABSTRACT
This essay discusses case studies that point to problems related to the use of AI in shaping the virtual world. AI algorithms help to shape and control the conventional web behaviour and speech of today’s media users, mostly teenagers and adults. Given the pace of software development, social media may constitute a separate virtual world in the future, and AI also shapes the image of this world and of human relationships. The essay begins with an analysis of the future of social media against the background of truth; case studies then show the problems that AI causes for media users and the community. The authors attempt to answer questions such as: What kinds of attitudes and abilities will be shaped in social media? What network ethics does AI dictate? What kinds of attitudes and thinking will be promoted in the social media of the future?
 
eISSN: 2300-7109