ProxiTok: TikTok Security Concerns

Technical Summary

TikTok, the popular social media platform, has faced significant privacy concerns due to several factors highlighted in various reports, including the Reuters article from March 2023. This technical summary delves into the key technical aspects that contribute to these privacy issues.

In recent years, TikTok has taken the world by storm, captivating millions of users with its addictive short-form videos. However, as the platform’s popularity has grown, so have concerns regarding privacy and security. In response to these apprehensions, a new initiative called ProxiTok has emerged, aiming to address and mitigate the security concerns associated with TikTok.

1. Data Collection and Sharing: TikTok collects extensive user data, including personal information, device details, and usage patterns. Concerns arise due to the volume and nature of data collected, which may include sensitive information. Furthermore, reports suggest that TikTok shares user data with third-party entities, potentially compromising privacy.

2. Data Transmission to Non-Local Servers: Critics have raised concerns about TikTok’s data transmission practices. User data, such as geolocation and unique device identifiers, may be sent to servers located outside the user’s country of residence. This raises questions about data sovereignty, control, and potential exposure to foreign governments.

3. Security Vulnerabilities: TikTok has faced scrutiny regarding security vulnerabilities within its app. Past reports have highlighted potential vulnerabilities that could lead to unauthorized access to user accounts or the collection of sensitive information without user consent. Such vulnerabilities undermine user privacy and raise doubts about the platform’s overall security posture.

4. Complex Privacy Policies: TikTok’s privacy policies and terms of service have been criticized for their complexity and lack of transparency. Users may find it challenging to understand how their data is being collected, used, and shared. This lack of clarity raises concerns about informed consent and whether users fully comprehend the extent of data processing activities.

Key Findings

  • Data Collection and Sharing
  • Data Transmission to Non-Local Servers
  • Security Vulnerabilities
  • Complex Privacy Policies


In recent years, TikTok has emerged as one of the most popular social media platforms, captivating millions of users worldwide with its short-form videos and viral trends. However, amidst its meteoric rise, TikTok has faced significant scrutiny and raised concerns over privacy issues. This article explores the privacy concerns surrounding TikTok and sheds light on the ongoing debate about its data handling practices.

One of the primary reasons behind the privacy concerns surrounding TikTok is its Chinese ownership and the potential risks associated with it. The fear stems from the possibility that user data collected by TikTok could be accessed or shared with the Chinese government, raising questions about data privacy and national security.

TikTok collects a vast amount of user data, ranging from personal information such as names, locations, and contacts to device information and usage patterns. While data collection is common across many social media platforms, the manner in which TikTok handles and shares this data has come under scrutiny.

Concerns have been raised regarding TikTok’s data-sharing practices with third-party entities, which could compromise user privacy. Reports have also suggested that user data, including geolocation and unique device identifiers, may be transmitted to servers outside the jurisdiction of the user’s country, raising questions about data sovereignty and control.

Implications and Concerns: The collaborations between ByteDance and defense universities raise several concerns:

1. Access to Research Findings: State organizations funding ByteDance’s research, such as the Beijing Academy of Artificial Intelligence and the Ministry of Public Security Technology Research Program, may have access to the company’s research findings. This raises questions about the potential use of these findings for military purposes.

2. Technological Advancements: ByteDance’s involvement in cutting-edge research with defense universities suggests the potential transfer of advanced technologies to the military sector. This could contribute to the development of AI-powered military capabilities, including surveillance, cyber operations, and information warfare.

3. Data Security and Privacy: With TikTok’s extensive user base and data collection practices, concerns arise regarding the security and privacy of user data. The association with defense-related entities raises questions about the potential misuse or unauthorized access to sensitive user information.

Critics argue that TikTok lacks transparency when it comes to its data handling practices. The app’s privacy policy and terms of service are often perceived as overly complex and difficult to comprehend for the average user. This lack of transparency raises concerns about informed consent and whether users fully understand how their data is being used and shared.

The privacy concerns surrounding TikTok have not gone unnoticed by regulators and policymakers. In some countries, including the United States, there have been calls to ban or restrict TikTok due to concerns about data privacy and national security risks. These concerns have prompted discussions about strengthening regulations related to data protection, especially in the context of foreign-owned platforms.

While TikTok has gained immense popularity, it is crucial to address the privacy concerns associated with the platform. Transparency, responsible data handling, and robust security measures should be paramount in safeguarding user privacy. Ongoing dialogue between regulators, policymakers, and TikTok’s parent company, ByteDance, is necessary to establish a framework that ensures the protection of user data without compromising the innovative and entertaining aspects that have made TikTok so popular.

In the digital age, where privacy and data security are paramount, it is crucial for users to be aware of the potential risks associated with any social media platform. By understanding the privacy concerns surrounding TikTok and advocating for stronger safeguards, users can navigate the digital landscape with greater confidence and protect their personal information.

1. Huazhong University of Science and Technology: Rated as a “very high risk” due to its defense laboratories and close ties to the defense industry. ByteDance researchers have worked with scientists from the university’s State Key Lab of Multi-spectral Image Information Processing Technology on person re-identification, a technology with military applications.

2. People’s Public Security University of China: Rated as a “very high risk” due to its affiliation with the Ministry of Public Security. ByteDance researchers collaborated with this university on deepfake technology, which has implications for security and intelligence operations.

3. Tsinghua University: Rated as a “very high risk” for its involvement in defense research and alleged cyberattacks. ByteDance collaborated with Tsinghua on quantum computing and deep neural networks, technologies with potential military applications.

4. Peking University: Rated as a “high risk” due to its involvement in defense research. ByteDance collaborated with Peking University researchers on intelligent text generation, which has implications for natural language processing and information warfare.

ByteDance, the parent company of TikTok, has also been implicated in collaborations with military-linked institutions and research programs in China. This article explores ByteDance’s association with the Beijing Academy of Artificial Intelligence, its partnerships with defense universities, and the potential implications of these collaborations. The findings raise concerns about data access, technological advancements, and the integration of artificial intelligence (AI) into military applications.

The Beijing Academy of Artificial Intelligence and National Agenda: ByteDance’s status as a founding member of the Beijing Academy of Artificial Intelligence, established by China’s Ministry of Science and Technology and the Beijing Municipal People’s Government in 2018, is notable. This academy aims to drive AI development in China and promote a military-civilian fusion pattern. It includes prominent academic institutions like Peking University, Tsinghua University, and the Chinese Academy of Sciences. Megvii, an AI giant blacklisted by the U.S. government, is also involved. This collaborative effort aligns with China’s strategy to achieve a first-mover advantage in AI development and embed AI technologies within the field of national defense.

Collaborations with Defense Universities: ByteDance researchers have collaborated with universities that have strong links to China’s military and defense industry. These collaborations raise questions about the potential transfer of knowledge, technological advancements, and access to research findings. Notable collaborations, detailed above, involve Huazhong University of Science and Technology, the People’s Public Security University of China, Tsinghua University, and Peking University.

TikTok Risks

TikTok, the widely popular social media platform, has faced intense scrutiny due to privacy concerns and its connection to the Chinese company ByteDance. This article delves into the potential privacy violations and risks associated with TikTok, including unauthorized access to user data, data harvesting, surveillance and intelligence operations, CCP censorship influence, narrative control, and political interference. Understanding these concerns is crucial for users to make informed decisions regarding their privacy and personal information.

Privacy Violations and Unauthorized Access: There are apprehensions that TikTok could be exploited for unauthorized access to sensitive user data. Security breaches, including potential backdoors or data retrieval by China-based staff, pose risks of data theft. In December, ByteDance admitted that some China-based staff surveilled U.S. journalists and TikTok employees using the app’s geolocation function, raising concerns about privacy breaches.

Data Harvesting and Strategic Advantage: The large datasets amassed by TikTok could be leveraged by Beijing to support the Chinese Communist Party’s goals in its competition with liberal democracies. Such data could assist in the development of critical capabilities in areas like big data analysis, AI, supercomputing, and predictive modeling, with potential military and intelligence applications. The concern lies in how this data might be used to profile, analyze, and target individuals or population segments.

Surveillance and Intelligence Operations: Data collected through TikTok could be exploited for intelligence purposes, including surveillance, recruitment, manipulation, and repression. Individuals critical of Beijing or holding key positions may become targets. The information obtained could encompass compromising material, device fingerprints, or location data, raising concerns about potential privacy infringements and abuse of personal information.

Exporting CCP Censorship: There is a fear that elements of the Chinese Communist Party’s censorship preferences could be integrated into TikTok, infringing on individuals’ rights to freedom of expression. This could impact the quality of open debates and limit the range of topics and narratives allowed within the platform, potentially influencing global discourse.

Narrative Control and Manipulation: The TikTok-curated information environment holds growing significance for social and political discussions. It could be manipulated through selective promotion or demotion of certain topics, narratives, or creators, including political figures. Content moderation, manipulation of recommendation algorithms, and undisclosed interventions by staff could impact the integrity of information on the platform.

Political Interference and Disinformation: TikTok’s extensive user data and algorithmic capabilities raise concerns about potential political interference. Bad actors could exploit the platform for large-scale, coordinated campaigns involving harassment, disinformation, and astroturfing. Agitprop mobilization campaigns could manipulate geopolitical discussions, political debates, and elections, posing risks to democratic processes and public opinion.

Permission Request

Permissions typically requested by TikTok:

1. Camera Access: TikTok requires access to your device’s camera to enable video recording and uploading within the app. This permission allows you to create and share content seamlessly.

2. Microphone Access: Access to your device’s microphone is requested by TikTok to capture audio for videos. This permission enables you to add sounds, music, or your voice to the videos you create.

3. Photos and Media Files Access: TikTok requests access to your device’s photos and media files to allow you to personalize your content. This permission enables you to select and upload images or videos from your device’s gallery.

4. Location Access: By granting TikTok access to your device’s location, you can discover content tailored to your geographical area. This permission enhances the TikTok experience with location-based features. TikTok assures that your location data is used within the app and is not shared with third parties.
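The permission model above can be sketched as a simple grant policy. The permission names loosely mirror Android’s runtime permissions, and the policy itself is a hypothetical illustration of a privacy-conscious choice, not TikTok’s actual manifest or behavior.

```python
# Illustrative sketch of an app's requested permissions and a
# privacy-conscious grant policy. Permission names and the policy
# are assumptions for illustration, not TikTok's actual manifest.

REQUESTED_PERMISSIONS = {
    "CAMERA": "record and upload videos",
    "RECORD_AUDIO": "capture audio for videos",
    "READ_MEDIA_IMAGES": "select photos and videos from the gallery",
    "ACCESS_FINE_LOCATION": "location-based content discovery",
}

# A cautious user might grant only the permissions essential to the
# app's core function and deny location access.
ESSENTIAL = {"CAMERA", "RECORD_AUDIO", "READ_MEDIA_IMAGES"}

def decide_grants(requested, essential):
    """Return (granted, denied) permission sets under the policy."""
    granted = {p for p in requested if p in essential}
    denied = set(requested) - granted
    return granted, denied

granted, denied = decide_grants(REQUESTED_PERMISSIONS, ESSENTIAL)
```

Under this sketch, location access is the one permission a user could deny without breaking the app’s core recording and uploading features.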

Telephony identifiers also play a crucial role in modern communication systems, enabling the identification and tracking of devices connected to telecommunication networks. These identifiers, such as phone numbers or International Mobile Subscriber Identity (IMSI) numbers, serve various purposes in telecommunications and are associated with location, connection interfaces, and personal information management (PIM) data. In this article, we will explore the significance of telephony identifiers and their implications for data and privacy.

A telephony identifier is closely tied to the location of a device. Mobile network operators utilize these identifiers to establish connections and route calls and messages to the intended recipients. By analyzing the telephony identifier’s location information, service providers can offer location-based services and improve network efficiency. However, the association of telephony identifiers with location raises concerns about the potential tracking and monitoring of users’ movements, potentially compromising privacy.

Telephony identifiers are essential for establishing connections through various interfaces, including cellular networks, Voice over Internet Protocol (VoIP), or other communication technologies.

These identifiers enable seamless communication between devices and facilitate telephony services such as voice calls, messaging, and multimedia sharing. While these services enhance connectivity and communication, they also create opportunities for data interception and surveillance.

Telephony identifiers are often linked to personal information management (PIM) data, which includes contacts, calendar events, emails, and other personal data stored on devices. These identifiers enable synchronization of PIM data across multiple devices and platforms. However, the association of telephony identifiers with PIM data raises concerns about the potential exposure of sensitive personal information if the identifiers are compromised or accessed without authorization.

The use of telephony identifiers in telecommunications raises important considerations for data security and privacy. Service providers must ensure the secure handling of telephony identifiers and associated data, implementing robust encryption protocols and access controls to protect against unauthorized access or data breaches. Users should also exercise caution when sharing telephony identifiers or granting permissions to applications, as this information can be exploited for various purposes, including targeted advertising, surveillance, or identity theft.
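One common mitigation for the risks above is pseudonymizing identifiers before they are logged or shared. A minimal sketch, assuming a keyed HMAC so the raw IMSI cannot be recovered from the token; the secret key shown is a placeholder that would normally come from secure storage:

```python
import hmac
import hashlib

# Sketch: pseudonymize a telephony identifier (e.g. an IMSI) before it
# is logged or shared, so the raw identifier never leaves the device.
# The key below is a placeholder assumption, not a real secret.
SECRET_KEY = b"replace-with-key-from-secure-storage"

def pseudonymize(identifier: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable, non-reversible pseudonym for the identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same identifier always maps to the same pseudonym (useful for
# deduplication across records), but the mapping cannot be inverted
# without the key.
token = pseudonymize("310150123456789")
```

Using a keyed HMAC rather than a plain hash matters here: unsalted hashes of identifiers drawn from a small, enumerable space (like phone numbers) can be reversed by brute force.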

Content Population

Online hate and extremism pose significant challenges to our digital societies. This article delves into the various categories of hate and extremism, shedding light on the prevalence of specific ideologies, symbols, and targeted communities. By analyzing the number of videos associated with each category, we aim to raise awareness about the scope and impact of online hate speech.

Promoting White Supremacy: With 312 videos, promoting white supremacy stands out as one of the most prevalent categories of online hate. This ideology perpetuates racial hierarchies and fosters discrimination, targeting marginalized communities and promoting divisive narratives.

Glorifying Extremist Individuals/Groups/Ideologies: 273 videos in this category glorify extremist individuals, groups, or ideologies. These videos amplify dangerous messages, glorify violence, and contribute to the radicalization and recruitment of individuals into extremist movements.

Extremist Symbols Embedded in Media: 150 videos feature extremist symbols embedded in media. These symbols serve as rallying points for hate groups and can incite violence or intimidation towards targeted communities.

Antisemitic Content: 153 videos spread antisemitic content, perpetuating harmful stereotypes, conspiracy theories, and promoting hatred against Jewish individuals and communities.

Anti-Black Content: With 139 videos, anti-Black content aims to demean and dehumanize Black individuals, perpetuating racism and discrimination based on skin color.

Anti-LGBTQ+ Content: 90 videos target the LGBTQ+ community, promoting prejudice, intolerance, and discriminatory attitudes towards individuals based on their sexual orientation or gender identity.

Anti-Muslim Content: 81 videos spread anti-Muslim sentiment, fostering Islamophobia, and perpetuating stereotypes and discrimination against Muslims.

COVID Conspiracies and Misinformation: 74 videos exploit COVID-19 conspiracies and misinformation to attack, threaten, or stigmatize specific individuals or groups. This category underscores the dangers of misinformation in exacerbating hate speech and discrimination.

Misogynistic Content: 58 videos promote misogyny, perpetuating sexist attitudes, objectification of women, and contributing to gender-based discrimination and violence.

Anti-Asian Content: 41 videos target Asian communities, spreading xenophobia, racism, and perpetuating harmful stereotypes.

Terrorism Footage: 26 videos feature terrorism-related content, showcasing violent acts and promoting extremist ideologies associated with terrorism.

Anti-Migrant/Refugee Content: 25 videos promote anti-migrant or anti-refugee sentiment, contributing to xenophobia and discrimination against individuals seeking safety and better lives in different countries.
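The category counts reported above can be tallied and ranked; the figures are taken directly from the text, and only the aggregation is new.

```python
# Video counts per category, as reported in the text above.
video_counts = {
    "Promoting white supremacy": 312,
    "Glorifying extremist individuals/groups/ideologies": 273,
    "Antisemitic content": 153,
    "Extremist symbols embedded in media": 150,
    "Anti-Black content": 139,
    "Anti-LGBTQ+ content": 90,
    "Anti-Muslim content": 81,
    "COVID conspiracies and misinformation": 74,
    "Misogynistic content": 58,
    "Anti-Asian content": 41,
    "Terrorism footage": 26,
    "Anti-migrant/refugee content": 25,
}

total = sum(video_counts.values())  # 1,422 videos across all categories
ranked = sorted(video_counts.items(), key=lambda kv: kv[1], reverse=True)
```

Summed, the twelve categories account for 1,422 videos, with promoting white supremacy and glorification of extremists together making up over 40 percent of the total.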

This section examines TikTok’s Community Guidelines, shedding light on the categories of content that pose challenges and the measures taken to address them.

Attacks on the Basis of Protected Attributes: With 344 videos falling into this category, TikTok takes a firm stance against content that promotes attacks based on protected attributes such as race, ethnicity, gender, religion, and more. Such content undermines the principles of equality and respect that TikTok strives to uphold.

Hateful Ideology, Including Claims of Supremacy and Denying Violent Events: TikTok prohibits content that promotes hateful ideologies, including claims of supremacy and denying violent events. Recognizing the potential harm caused by such content, TikTok has identified and removed 341 videos that perpetuate these harmful narratives.

Dangerous Individuals or Organizations: In an effort to protect users, TikTok’s Community Guidelines target content associated with dangerous individuals or organizations. The platform has taken action against 235 videos that may pose a risk to user safety or public security.

Threats and Incitement Towards Violence: TikTok firmly stands against content that threatens or incites violence. Recognizing the potential harm that such content can cause, TikTok has removed 62 videos that violated these guidelines, prioritizing user safety and the well-being of its community.

Slurs Based on Protected Attributes: TikTok aims to foster an inclusive environment where users feel respected and valued. As part of this commitment, the platform has removed 33 videos that contain slurs targeting protected attributes, ensuring that discriminatory language does not find a place on the app.

Regulated Goods – Weapons: In line with safety concerns, TikTok prohibits content related to regulated goods, including weapons. Recognizing the potential risks associated with the promotion of weapons, TikTok has removed 9 videos that violated these guidelines.

User Data

In today’s digital age, social media platforms play a significant role in shaping public discourse and disseminating information. Understanding the nature of content found on different platforms is crucial for gaining insights into various perspectives and the influence they may exert. This article presents a preliminary comparative analysis of content across major platforms, namely TikTok, Twitter, and YouTube, focusing on search results related to the People’s Liberation Army (PLA) and China’s military. By examining the nature of content and its alignment with the Chinese Communist Party (CCP), we can discern initial patterns and trends.

TikTok: When searching for “PLA” on TikTok, the preliminary analysis indicates an overwhelming pro-CCP stance. The majority of search results were found to be favourable towards the CCP, suggesting a potential bias or content moderation practices that favor positive depictions. Notably, the lack of relevant results for “Wuhan lab” raises questions about content moderation policies and the extent to which certain topics are allowed or prioritized on the platform.

Twitter: In contrast to TikTok, the preliminary analysis of search results on Twitter for “PLA” showed a more balanced distribution of content. Results were found to be roughly equal parts favourable and unfavourable towards the CCP. This diversity of perspectives suggests that Twitter serves as a platform for a wider range of opinions and discussions related to the PLA.

YouTube: Among the platforms analyzed, YouTube’s top search results contained the lowest levels of content favourable to the CCP. While not explicitly mentioned in the comparative analysis, it can be inferred that content related to the Chinese military, including the PLA, was generally less flattering to the CCP than on TikTok or Twitter. This may indicate differences in the user base, content creators, or content moderation practices on YouTube.

Content analysis is a research methodology used to examine and analyze various types of media content systematically. In this article, we will focus on the methodology employed for coding content, specifically regarding depictions of the Chinese Communist Party (CCP) and the presence of misinformation. By understanding the coding process, we can gain insights into how content was categorized and analyzed in relation to the CCP and its portrayal.

Coding Favourable Depictions of the CCP: During the content analysis, favourable depictions of the CCP were coded to identify instances where positive portrayals of the party were present. This could include content highlighting the achievements, policies, or leadership of the CCP. Additionally, any propaganda, military promotions, or praise for the Chinese Communist Party were also coded in this category.

Identifying Misinformation: Another crucial aspect of the content analysis methodology was to identify content containing misinformation. This involved coding instances where false or misleading information was present. Examples of such misinformation could include refutations of established facts, misleading claims, or the spread of conspiracy theories. Specific instances mentioned in the coding process included false claims about the Uyghur genocide, vaccine misinformation, and unfounded allegations about the January 6 Capitol riots.

Coding Unfavourable Depictions of the CCP: The coding process also involved identifying content that depicted the CCP unfavorably. This category included criticisms of CCP leadership, discussions about China’s human rights record, expressions of support for Tibetan and Taiwanese independence, and content that criticized CCP censorship practices.

Content Absent Misinformation: In order to maintain a balanced analysis, content that did not contain misinformation but still discussed relevant topics was also coded. This included factual reporting on election outcomes, discussions of personal political views, and reports on investigations into significant events or individuals. Examples mentioned in the coding process included reporting on investigations related to Hunter Biden’s laptop and discussions about the presence or absence of a “red wave” during the U.S. 2022 midterms.

Coding as N/A: In some cases, content was coded as N/A (not applicable). This category included instances where the content was unrelated to the target topic or did not present any discernible claim. These pieces of content were excluded from further analysis as they did not contribute to the research objectives.
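The coding scheme described above can be sketched as a decision function that assigns each item a single code. The label names and precedence rules below are hypothetical stand-ins for the human coders’ judgment, not the study’s actual instrument.

```python
# Minimal sketch of the content-coding scheme: each item receives one
# code. Label names and rule ordering are illustrative assumptions.

def code_item(labels: set) -> str:
    """Map annotator labels for one item to a single analysis code."""
    if not labels or "unrelated" in labels:
        return "na"                    # no discernible claim / off-topic
    if "false_claim" in labels:
        return "misinformation"        # e.g. refutations of established facts
    if "ccp_praise" in labels or "military_promotion" in labels:
        return "favourable"            # positive portrayals, propaganda
    if "ccp_criticism" in labels or "human_rights" in labels:
        return "unfavourable"          # criticism, censorship discussion
    return "no_misinformation"         # factual, on-topic reporting
```

Putting the misinformation check ahead of the favourable/unfavourable checks reflects the reasonable assumption that a false claim is coded as misinformation regardless of its stance toward the CCP.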

Does App Nationality Matter?

The study focused on the impact of app nationality on user trust by comparing users randomized to either TikTok (a Chinese app) or Instagram (an American app). The results showed that both before and after being presented with an argument, users who saw the TikTok app found it significantly less trustworthy than those who saw the Instagram app.

While the other two independent variables did not show significant differences in the distribution of preheld trust, the app’s nationality was the only variable that users were exposed to before treatment. This suggests that users already had preconceived beliefs about each app, and trust in TikTok was lower even before any attempt at persuasion was made. The majority of users correctly identified the nationality of both apps, supporting the idea that the app’s nationality deeply influenced trust levels.

Contrary to the hypothesis (H1), arguments concerning Instagram were found to be more effective in decreasing trust levels compared to arguments about TikTok. This could be attributed to the lower distributions of initial trust for TikTok. The study further explores this relationship in the subsequent section.

The study’s findings highlight the significant influence of app nationality on user trust, emphasizing the importance of users’ preconceived beliefs and perceptions.

The study aimed to investigate changes in trust for TikTok and Instagram based on users’ initial trust levels. The analysis revealed that users who already held a distrusting view of either app before treatment showed minimal changes in trust. However, the distribution of trust after treatment appeared lower for those who saw TikTok compared to those who saw Instagram in higher trust categories, although this difference was not statistically significant.

To further explore the persuasive impact of arguments, the study compared the effectiveness of arguments for TikTok and Instagram among users with different levels of initial trust. Users who already distrusted an app were excluded from the analysis. The results, presented in Table 5.5, indicated that when grouped by initial trust levels, users who saw TikTok and those who saw Instagram were influenced similarly by the arguments. There were no significant differences in how persuasive the arguments were, and no strong evidence of differences in distribution by trust categories.

However, it was noteworthy that the proportion of respondents who already distrusted TikTok was substantially higher compared to those who saw Instagram. Over one-third of TikTok users (35.8%) found the app somewhat or very untrustworthy, while only 16.7% of Instagram users held similar views. This difference in initial trust was supported by the results of the Mann-Whitney-Wilcoxon tests conducted across the apps.
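The Mann-Whitney-Wilcoxon comparison mentioned above can be sketched with the textbook U statistic in its pairwise form (fine for small samples). The rating vectors below are invented for illustration and are not the study’s data.

```python
def mann_whitney_u(a, b):
    """U statistic for sample a vs sample b; ties count as 1/2.

    Pairwise O(n*m) form of the Mann-Whitney U statistic, adequate
    for small illustrative samples.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical 5-point trust ratings (1 = very untrustworthy).
tiktok_ratings = [1, 2, 2, 3, 1]
instagram_ratings = [3, 4, 2, 4, 3]

u = mann_whitney_u(tiktok_ratings, instagram_ratings)
# A small U relative to len(a) * len(b) (here 25) indicates the first
# sample's ratings tend to be lower, i.e. lower trust in TikTok.
```

In practice one would use a library routine (e.g. `scipy.stats.mannwhitneyu`) that also computes the p-value, but the statistic itself is just this pairwise count.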

Personal Relevance to App

In a recent study examining the persuasive power of arguments on user trust in TikTok and Instagram, researchers uncovered the significant role of personal relevance in shaping users’ responses. The study explored how users’ app usage frequency and non-usage status affected the effectiveness of arguments in influencing trust. These findings shed light on the crucial link between personal relevance and user attitudes towards apps, particularly in the case of TikTok.

Usage Frequency and Argument Effectiveness: One of the key findings of the study revealed that users with higher app usage frequency were less likely to be persuaded by arguments and more inclined to maintain their trust in the app. This suggests that apps with a large user base and higher engagement levels are less susceptible to claims of untrustworthiness due to their heightened relevance within the user population. Surprisingly, high-frequency users exhibited similar levels of trust for both TikTok and Instagram, regardless of the presented argument. This indicates that negative news coverage and securitizing arguments failed to convince this population to discontinue their app usage. It aligns with existing research that suggests well-informed individuals are more resistant to persuasion through media information. Although personal relevance may not directly parallel political knowledge, high usage frequency implies a stronger connection and knowledge of the app, making arguments against TikTok generally ineffective among this group of users.

Effects of Media Attention on High-Frequency Users: Another possible explanation for the lack of differentiation between TikTok and Instagram among high-frequency users is the impact of media attention. Given that this study took place after the announcement and media scrutiny surrounding the TikTok ban, it is possible that some high-frequency users ceased their extensive app usage due to external influences. This aligns with the hypothesis that those who would have been persuaded by securitizing arguments were already convinced before the study, leading to a decline in TikTok usage. Consequently, the remaining high-frequency TikTok users comprised individuals who were not persuaded by securitizing arguments and did not significantly differ from high-frequency users of Instagram.

Non-Users’ Trust Response: The study also examined the trust levels of non-users of TikTok and Instagram. Before treatment, non-users displayed varying levels of trust between the two apps. However, after exposure to arguments, their trust distributions did not significantly differ. This implies that the persuasive power did not lie in the securitizing elements of the argument but rather in the mere act of presenting information about an app to which these individuals had little personal relevance. The researchers hypothesize that due to their low personal relevance, non-users were easily convinced by arguments, resulting in substantial decreases in trust. Therefore, considering the already lower trust in TikTok due to prior media exposure, the effect of arguments was more pronounced for Instagram, which generally enjoyed higher levels of trust. Consequently, individuals with no personal stake in the app experienced an immediate decrease in trust upon being explicitly informed of an app’s untrustworthiness or dubious data collection practices, regardless of the app’s national origin. Hence, for non-users, it was not securitizing arguments but negative media coverage and exposure to app dangers that proved effective in shaping their perceptions.

Privacy Concerns for University Students

TikTok, the immensely popular short-form video-sharing app, has gained significant traction among university students worldwide. While TikTok offers a platform for creative expression and entertainment, it is essential to recognize the potential privacy concerns associated with its use. This article examines the specific privacy issues that university students may encounter while engaging with TikTok and offers insights into safeguarding personal information in this digital landscape.

Data Collection and Sharing: One of the primary privacy concerns on TikTok revolves around the collection and sharing of user data. The app collects various types of information, including profile data, location data, device information, and even user-generated content. While this data enables personalized recommendations and enhances the user experience, it also raises questions about data privacy and security. University students, who often share personal details and engage in trending challenges on TikTok, should be aware of the risks of their data being stored and potentially shared with third parties.

Geolocation and Personal Safety: TikTok’s geolocation features, which allow users to tag their location or participate in location-based challenges, can inadvertently expose university students to privacy risks. Students may unintentionally reveal their current whereabouts, which can compromise their personal safety, particularly when sharing videos from on-campus locations or private residences. Users should exercise caution with geolocation features and weigh the implications of disclosing where they are.

Public vs. Private Accounts: TikTok offers users the option to maintain either public or private accounts. Public accounts are visible to all TikTok users, while private accounts restrict access to approved followers only. University students should carefully consider the visibility of their profiles and the content they share. Public accounts may attract a wider audience but also expose personal information to a broader range of individuals. Private accounts provide greater control over who can view content, reducing the risk of unauthorized access. Students should evaluate their privacy preferences and select the account type that aligns with their comfort level and desired level of online exposure.

Third-Party Apps and Data Security: The TikTok user experience is often enhanced by utilizing third-party applications or integrations. While these features can add fun and creativity to videos, they may also present privacy risks. Third-party applications could potentially access users’ TikTok data or request permission to collect additional personal information. University students should exercise caution when granting permissions to external applications and carefully review their privacy policies and data handling practices. It is advisable to prioritize applications with robust security measures and a trustworthy reputation.
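One way to reason about the advice above is to treat permission requests as a set to be checked against what an app actually needs. The sketch below is purely illustrative: TikTok does not expose such an interface to end users, and the permission names are hypothetical examples.

```python
# Illustrative sketch (not a real TikTok API): compare the permissions a
# third-party app requests against an allowlist of what it plausibly needs,
# and flag anything beyond that for closer review. Permission names are
# hypothetical.

ALLOWED_PERMISSIONS = {"read_public_videos", "post_comments"}

def review_permissions(requested: set[str]) -> list[str]:
    """Return the requested permissions that fall outside the allowlist."""
    return sorted(requested - ALLOWED_PERMISSIONS)

# Example: a video-editing app asks for more access than editing requires.
flagged = review_permissions({"read_public_videos", "read_contacts", "read_location"})
print(flagged)  # ['read_contacts', 'read_location']
```

The point of the exercise is the habit, not the code: any permission a third-party integration requests beyond its obvious function (here, contacts and location for a video editor) deserves scrutiny before being granted.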

Protecting Personal Information: To mitigate privacy risks on TikTok, university students can take proactive measures to safeguard their personal information. These include:

1. Reviewing Privacy Settings: Familiarize yourself with TikTok’s privacy settings and customize them to align with your preferences. Adjust who can view your videos, comment on your content, and send you direct messages.

2. Minimizing Personal Details: Be cautious about sharing sensitive personal information on your profile or in videos. Avoid including identifiable details such as full names, addresses, or contact information.

3. Limiting Location Sharing: Consider disabling geolocation features or being selective about when and where you share your location on TikTok.

4. Regularly Reviewing Privacy Policies: Stay informed about TikTok’s privacy policy and terms of service. The app updates its guidelines frequently, and keeping up with these changes helps you make informed decisions about your privacy.

5. Exercising Caution with Challenges: Participate in challenges responsibly and be mindful of the potential consequences. Some challenges may involve sharing personal information or engaging in risky behavior, so evaluate the potential privacy implications before joining in.
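Step 2 above, minimizing personal details, can even be partly automated as a self-check before posting. The sketch below is a minimal example, not an exhaustive PII detector: it only looks for email-like and US-style phone-like strings, and both patterns are simplified.

```python
import re

# Illustrative self-check: scan a profile bio or video caption for details
# that are better left off a public account (email addresses, phone numbers).
# The patterns are deliberately simple examples, not a complete PII scanner.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def find_personal_details(text: str) -> list[str]:
    """Return any email-like or phone-like strings found in the text."""
    return EMAIL_RE.findall(text) + PHONE_RE.findall(text)

bio = "CS major at State U! Reach me at jane.doe@example.edu or 555-123-4567"
print(find_personal_details(bio))  # ['jane.doe@example.edu', '555-123-4567']
```

Running a check like this on a draft bio or caption catches the most common slips; it does not replace the broader judgment the checklist asks for, such as avoiding identifiable locations in the video itself.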


  • Committee on Energy and Commerce Hearing entitled “TikTok: How Congress Can Safeguard American Data Privacy and Protect Children from Online Harms” [March 23, 2023]
  • TikTok, It’s Threat to National Security O’Clock
