Biometric Digital Privacy Protection Behavior among Deepfake App Users: A Study of Egyptian University Students

Author

Radio and Television Instructor, Faculty of Media, Beni Suef University, Egypt

Abstract

In light of the spread of deep learning techniques, electronic apps have appeared that require the collection of huge amounts of biometric digital data from their users. Among the most prominent of these recent apps, which depend on deep learning technology, are deepfake apps such as DeepFakeLab, Face Swap Live, Zao, and Reface, which create fake photos and videos that humans cannot distinguish from the originals.
The importance of the study stems from the lack of Arab media studies dealing with the employment of artificial intelligence techniques and with new phenomena in biometric digital media such as deepfake apps. The danger of these apps is that they require the collection and storage of distinct biometric characteristics, such as the face, iris, and fingerprint, which exposes the biometric digital privacy of their users to hacking.
The current study aims to identify the biometric digital privacy protection behavior of deepfake app users among Egyptian university students. It also aims to measure the sample's usage rates of these apps according to demographic variables, type of phone, university, and nature of residence, and to reveal differences between groups in biometric digital privacy protection behavior.
In this context, the biometric privacy protection motivation scale was applied to a convenience sample of 390 Egyptian (public and private) university students in rural and urban areas, based on Protection Motivation Theory (PMT), which has been used in several studies to explore online digital privacy behaviors. The term "biometric digital privacy protection behaviors" refers to the procedures used to avoid the theft of sensitive personal and vital information.
The findings indicated statistically significant differences between students who use deepfake apps and non-users in the five sub-dimensions of the scale (perceived severity, perceived vulnerability, self-efficacy, response efficacy, and rewards) and in the overall score on the biometric protection behavior scale.

Keywords

Deepfake apps; biometric digital privacy; privacy protection behavior; Protection Motivation Theory; Egyptian university students

References

Acquisti, A., Brandimarte, L., & Loewenstein, G. (2020). Secrets and likes: the drive for privacy and the difficulty of achieving it in the digital age. Journal of Consumer Psychology, 30(4), 736-758
Agarwal, S., Farid, H., El-Gaaly, T., & Lim, S. N. (2020, December). Detecting deep-fake videos from appearance and behavior. In 2020 IEEE International Workshop on Information Forensics and Security (WIFS) (pp. 1-6).
Ahmed, S. R. A., & Sonuç, E. (2021). Deepfake detection using rationale-augmented convolutional neural network. Applied Nanoscience, 1-9.
Al-Othmani, Muhammad. (2021). Facial recognition technology and crime control in Arab airports. Security Policy Papers, 2(1), 1-10.
Al-Qarni, Saad. (2021). The relationship between the thinking style and the dissemination of privacy through the new social media. Journal of Media Research, 59(2), 551-600.
Ayat Kasi, Houria. (2021). Applications for tracing contacts of people infected with (Covid 19) between the need to protect public health and the risks of violating privacy. Revue critique de droit et sciences politiques, 16(1), 39-61
Berghoff, C., Neu, M., & von Twickel, A. (2021). The Interplay of AI and Biometrics: Challenges and Opportunities. Computer, 54(9), 80-85.
Bin Nasser Al-Nasiri, Khalfan. (2019). The extent of the impact of social networks on digital rights among students of post-basic education (11-12) in schools in the Sultanate of Oman. The Educational Journal of the College of Education in Suhag, 67 (67), 436-450.
Bode, L. (2021). Deepfaking Keanu: YouTube deepfakes, platform visual effects, and the complexity of reception. Convergence, 27(4), 919-934.
Chen, H., Beaudoin, C. E., & Hong, T. (2017). Securing online privacy: An empirical test on Internet scam victimization, online privacy concerns, and privacy protection behaviors. Computers in Human Behavior, 70, 291-302.
Chen, M., Liao, X., & Wu, M. (2021). PulseEdit: Editing Physiological Signal in Facial Videos for Privacy Protection.
Chowdhury, S. A. K., & Lubna, J. I. (2020, July). Review on Deep Fake: A looming Technological Threat. In 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT) (pp. 1-7).
Chuang, Y. H., Lei, C. L., & Shiu Jr, H. (2021). How to Design a Secure Anonymous Authentication and Key Agreement Protocol for Multi-Server Environments and Prove Its Security. Symmetry, 13(9), 1629.
Cozma, R., & Muturi, N. (2021). It's Not All Doom and Gloom: Protection Motivation Theory Factors That Reverse the Negative Impact of Social Media Use on Compliance and Protective Health Behaviors. Southwestern Mass Communication Journal, 37(1).
Dargan, S., & Kumar, M. (2020). A comprehensive survey on the biometric recognition systems based on physiological and behavioral modalities. Expert Systems with Applications, 143, 113114.
Delgado-Santos, P., Stragapede, G., Tolosana, R., Guest, R., Deravi, F., & Vera-Rodriguez, R. (2021). A Survey of Privacy Vulnerabilities of Mobile Device Sensors. arXiv preprint arXiv:2106.10154.
El-Madawy, Muhammad. (2018). Protecting the information privacy of the user through social networking sites - a comparative study. Journal of the College of Sharia and Law in Tanta: A Refereed Scientific Quarterly Journal, 33(4), 1926-2057.
Fletcher, J. (2018). Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post-Fact Performance. Theatre Journal, 70(4), 455-471. doi:10.1353/tj.2018.0097
Furini, M., Mirri, S., Montangero, M., & Prandi, C. (2020). Privacy Perception when Using Smartphone Applications. Mob Networks Appl., 25(3), 1055-1061.
Grindley, E. J., Zizzi, S. J., & Nasypany, A. M. (2008). Use of protection motivation theory, affect, and barriers to understand and predict adherence to outpatient rehabilitation. Physical Therapy, 88(12), 1529-1540.
Haag, S., Siponen, M., & Liu, F. (2021). Protection motivation theory in information systems security research: A review of the past and a road map for the future. ACM SIGMIS Database: the DATABASE for Advances in Information Systems, 52(2), 25-67.
Hasan, H. R., & Salah, K. (2019). Combating Deepfake Videos Using Blockchain and Smart Contracts. IEEE Access, 7, 41596-41606.
Jarvis, L. (2021). Deepfake-ification: A Postdigital Aesthetics of Wrongness in Deepfakes and Theatrical Shallowfakes. Avatars, Activism and Postdigital Performance: Precarious Intermedial Identities, 89.
Johnston, A. C., & Warkentin, M. (2010). Fear appeals and information security behaviors: An empirical study. MIS Quarterly, 34(3), 549-566.
Kim, A. Y., & Kim, T. S. (2016). Factors Influencing the Intention to Adopt Identity Theft Protection Services: severity vs Vulnerability. In PACIS (p. 68).
Kokolakis, S. (2017). Privacy attitudes and privacy behavior: A review of current research on the privacy paradox phenomenon. Computers & Security, 64, 122-134.
Komkova, G., Amelin, R., & Kulikova, S. (2020, May). Legal protection of personal image in digital relations: leading trends. In 6th International Conference on Social, economic, and academic leadership (ICSEAL-6-2019) (pp. 382-390). Atlantis Press.
Kwok, A. O., & Koh, S. G. (2021). Deepfake: a social construction of technology perspective. Current Issues in Tourism, 24(13), 1798-1802.
Laishram, L., Rahman, M. M., & Jung, S. K. (2021, February). Challenges and Applications of Face Deepfake. In International Workshop on Frontiers of Computer Vision (pp. 131-156). Springer, Cham.
Lee, J. Y., & Al Khaldi, N. (2020). Exploring the ethical implications of new media technologies: A survey of online platform users' digital literacy and its effects on digital trust and privacy awareness. 1-2. Abstract from 70th Annual International Communication Association Conference (ICA 2020), Washington D.C., United States.
Li, Q., Dong, P., & Zheng, J. (2020). Enhancing the security of pattern unlock with surface EMG-based biometrics. Applied Sciences, 10(2), 541.
Litush, Delilah. (2019). Legal protection of the right to digital privacy of the electronic consumer. Journal of the Humanities, 171-179.
Mahdavifar, S., & Ghorbani, A. A. (2019). Application of deep learning to cybersecurity: A survey. Neurocomputing, 347, 149-176.
Marett, K., McNab, A. L., & Harris, R. B. (2011). Social networking websites and posting personal information: An evaluation of protection motivation theory. AIS Transactions on Human-Computer Interaction, 3(3), 170-188.
Martin, K. D., Borah, A., & Palmatier, R. W. (2017). Data privacy: Effects on customer and firm performance. Journal of Marketing, 81(1), 36-58.
MCMC (2014). Communications & Multimedia Pocket Book of Statistics.
Nemec Zlatolas, L., Welzer, T., Heričko, M., & Hölbl, M. (2015). Privacy antecedents for SNS self-disclosure: The case of Facebook. Computers in Human Behavior, 45, 158-167.
Newland, M. C. (2019). An information theoretic approach to model selection: A tutorial with Monte Carlo confirmation. Perspectives on behavior science, 42(3), 583-616.
Palladino, B. E., Menesini, E., Nocentini, A., Luik, P., Naruskov, K., Ucanok, Z., & Scheithauer, H. (2017). Perceived Severity of Cyberbullying: Differences and Similarities across Four Countries. Frontiers in Psychology, 8, 1524.
Saad Ibrahim, Mohammed. (2021). The right to digital privacy in the context of the data revolution and patterns of legislative and international interventions. Journal of Media Research and Studies, 15(15), 1-40.
Salleh, N., Hussein, R., Mohamed, N., Abdul, N. S., Ahlan, A. R., and Aditiawarman, U. (2012). Examining Information Disclosure Behavior on Social Network Sites Using Protection Motivation Theory, Trust and Risk. Journal of Internet Social Networking & Virtual Communities, 2012.
Sayler, K. M., & Harris, L. A. (2020). Deep fakes and national security. Congressional Research Service, Washington, DC, United States.
Schneider, S., Fürsich, F. T., & Werner, W. (2011). Biometric methods for species recognition in Trigonia Bruguière (Bivalvia; Trigoniidae): a case study from the Upper Jurassic of Western Europe. Paläontologische Zeitschrift, 85(3), 257-267.
Sedek, M., Mahmud, R., Jalil, H. A., & Daud, S. M. (2012). Types and levels of ubiquitous technology use among ICT undergraduates. Procedia-Social and Behavioral Sciences, 64, 255-264.
Sedik, A., Hammad, M., Abd El-Latif, A. A., El-Banby, G. M., Khalaf, A. A., Abd El-Samie, F. E., & Iliyasu, A. M. (2021). Deep Learning Modalities for Biometric Alteration Detection in 5G Networks-Based Secure Smart Cities. IEEE Access, 9, 94780-94788.
Stern, T., & Kumar, N. (2017). Examining privacy settings on online social networks: a protection motivation perspective. International Journal of Electronic Business, 13(2-3), 244-272.
Tesfagergish, S. G., Damaševičius, R., & Kapočiūtė-Dzikienė, J. (2021, September). Deep Fake Recognition in Tweets Using Text Augmentation, Word Embeddings and Deep Learning. In International Conference on Computational Science and Its Applications (pp. 523-538).
Trifiletti, E., Shamloo, S. E., Faccini, M., & Zaka, A. (2022). Psychological predictors of protective behaviours during the Covid‐19 pandemic: Theory of planned behaviour and risk perception. Journal of community & applied social psychology, 32(3), 382-397 .
Visvikis, D., Le Rest, C. C., Jaouen, V., & Hatt, M. (2019). Artificial intelligence, machine (deep) learning and radio (geno) mics: definitions and nuclear medicine imaging applications. European journal of nuclear medicine and molecular imaging, 46(13), 2630-2637.
Vithessonthi, C. (2010). Knowledge sharing, social networks and organizational transformation. The Business Review, Cambridge, 15(2), 99-10
Wojewidka, J. (2020). The deepfake threat to face biometrics. Biometric Technology Today (2), 5-7
Yang, W. C., & Tsai, J. C. (2020). Deepfake Detection Based on No-Reference Image Quality Assessment (NR-IQA). Forensic Science Journal, 19(1), 29-38.
Yao, X., Zhang, L., Du, J., & Gao, L. (2021). Effect of Information-Motivation-Behavioral model based on protection motivation theory on the psychological resilience and quality of life of patients with type 2 DM. Psychiatric Quarterly, 92(1), 49-62.
Yavuzkiliç, S., Akhtar, Z., Sengür, A., & Siddique, K. (2021). DeepFake Face Video Detection Using Hybrid Deep Residual Networks and LSTM Architecture. In AI and Deep Learning in Biometric Security (pp. 81-104).
Yu, P., Xia, Z., Fei, J., & Lu, Y. (2021). A Survey on Deepfake Video Detection. IET Biometrics.