Stop Tech Misogyny: The Long Chain of Gender-Based Cyber Attacks and Disinformation

Disinformation targets women's bodies. Perpetrators use issues surrounding women's bodies and morality as a gateway: false information dictates how women "should" behave, speak, and dress. These are the findings of a survey conducted by Konde.co from November 3 to 10, 2025.

Dewi (not her real name) suddenly felt afraid to hold her phone.

Every time a notification came in on her phone, she felt like a bullet was being fired from a long barrel, hitting her. This made her afraid.

“I was afraid to hold my gadget, afraid to see notifications,” she recalled.

Dewi’s world felt like it was shrinking. There were days when she couldn’t explain what she was going through. She chose to stay silent, conserving the energy that was constantly being drained by her anxiety.

“I don’t have the energy to explain,” she said.

The fear began when she came to LBH Jakarta on September 18, 2017, for a discussion titled “Preserving the Memory of ’65.” Suddenly, rumors spread quickly that there was a “group of PKI members” at the meeting.

Before she could process all this, a group of people suddenly stormed the LBH Jakarta office. The group grew larger and louder. After that, Dewi heard hard objects being thrown at the LBH Jakarta building where she was located.

“We were then surrounded for hours,” said Dewi.

Read More: Indonesian Academics, Do We Stop Taking Accountability on Research Papers?

Dewi recounted that everyone inside was anxious about what to do: leaving was impossible, but how long could they hold out?

Stones continued to hit the walls of the building, and she heard shouts of “PKI!” through the window. In the darkness, Dewi thought to herself, “Oh no, I’m not going to survive in this building.”

That night, she kept praying that she and her friends there would be safe. Finally, they were able to leave after waiting for about 12 hours.

“We were only able to get out around dawn,” she continued.

A few days after the siege, documentation of the event spread across the digital world. What worried Dewi was not that the image had gone viral, but the fact that her family in her village had received it with comments that attacked her identity.

Images of Dewi’s body, her hijab, her expression, and her position in front of the stage circulated widely, taken over by a narrative aimed squarely at her. Misinformation spread, and negative comments flooded social media.

“The comments were shocking, numbering in the thousands. Some wrote, ‘Why is she wearing a hijab but attending such an event?’” she said softly.

In reality, she had simply been standing in the front row taking pictures; Dewi was doing her job as part of the campaign and media division of a women’s organization. However, a close-up of her face was used to build the accusation that Dewi, the hijab-wearing woman in the picture, was involved with the PKI and deserved to be punished.

“Yeah, it was just a spotlight,” she said when asked about the video.

Read More: Arrests at the End of the Day: Arrested While Hanging Out, Allegations of Sexual Harassment

In the early days, Dewi sought help from Purple Code, a feminist organization that advocates for digital security issues. Her office also encouraged her to seek counselling at the Pulih Foundation.

She did not expect her social life to be hindered. Her family forbade Dewi from attending similar events, which she felt restricted her movements.

“My family was more like forbidding me, saying, ‘If there are events like that, don’t come anymore.’ It became a restriction that limited me,” she said.

Her friends did not distance themselves, but Dewi decided to withdraw. Some responses from her close circle made her uncomfortable, such as, “Is that really you? Yes, if necessary, it’s okay. What are you doing there?” It sounded like a snap judgment that she should not have been there in the first place.

These comments traumatized her. In the end, Dewi chose to persevere. The trauma did not disappear, but she learned to name what had happened: digital violence, disinformation, doxing, and the misogyny in public comments. If digital violence is a siege, then Dewi’s voice today is a small step toward escaping it.

Dewi’s story is proof that false narratives about morality, domestic roles, and women’s bodies are often used to discredit, silence, or intimidate those who speak out in public spaces. In a political context, disinformation-based attacks against women activists are often combined with hate speech and digital sexual assault.

The same pattern can be seen in the experiences of Citra and Kinah, two women from South Sulawesi who were interviewed by Konde.co. Both faced dangers that began in the digital space and then spread to their real lives.

Read More: Prabowo-Gibran’s One-Year Report Card From A Gender Perspective

For Citra (22, not her real name), a lighthearted social media post about spending time with her queer friends became slander that set off a domino effect of disinformation: a barrage of anonymous accounts attacked her, her gender identity was twisted, and her university email was bombarded with messages. That same month, the attacks continued through a scam that drained 38 million rupiah from her savings.

When she reported the incident to the police, her report was ignored. She lost not only her reputation and money, but also her sense of safety and her trust in the system.

Meanwhile, Kinah (31), a female journalist in Eastern Indonesia, faces threats every time she publishes critical reports on illegal practices in her region. Slander alleging bribery, verbal abuse, and even physical threats came after she wrote about illegal diesel fuel (solar) dealings, and the perpetrator’s henchmen even came to her house. Kinah’s bad experiences with the authorities, ranging from attempts to “slip her money” to being brought face to face with the businessman she had exposed, made her reluctant to report the threats even as they intensified. (Citra’s and Kinah’s stories will be presented in more detail in the second article of this series.)

This long-standing situation prompted Konde.co to conduct research on women’s experiences with misinformation and disinformation that lead to information-based and digital violence. From November 3–10, 2025, Konde.co surveyed 115 respondents: women and individuals of diverse genders and sexualities.

The initial data collection yielded 127 respondents; before analysis, however, a filtering process (data cleaning) was carried out to ensure the quality and validity of the data.

Konde.co summarizes the research and findings in the following simple game. Readers can choose a role and select answers based on the findings and stories in this article:

Konde.co Research: The Twists and Turns of Women’s Experiences in the Flood of Information

Game based on Konde.co’s research and findings, 2025.

The online space has become an important arena for women, especially young women, to seek information, interact, and negotiate their trust in what they read and share.

Most respondents (67 people, or 58.26 percent) were in the 18–24 age range. The 25–34 age group followed with 43 respondents (37.39 percent). The remaining 4.35 percent of respondents were aged 35 and over.

Read More: MBG Must Be Stopped: Poisoning Scandals, Gender Bias, And The Shadow Of The Military

In terms of identity, 97.39 percent of respondents were women and 2.61 percent were from diverse gender groups, including trans women, non-binary, and genderqueer people. Most respondents lived in Western Indonesia (86.96 percent), while 13.04 percent were from Central and Eastern Indonesia. In terms of place of residence, 57.39 percent of respondents lived in big cities, 23.48 percent in medium-sized cities, 13.91 percent in small cities, and 5.22 percent in villages.

In terms of popular information channels, Instagram ranked highest as the main source with 22.76 percent, followed by TikTok (20.34 percent), WhatsApp and messaging apps (18.89 percent), and Twitter/X (15.50 percent). Meanwhile, YouTube (8.96 percent), online media (5.81 percent), and Facebook (4.60 percent) are still accessed but no longer serve as primary channels. Only a few still rely on television (1.45 percent), newspapers/magazines (0.48 percent), or radio (0.24 percent). This pattern illustrates how visual and fast-paced digital spaces like TikTok and Instagram have become the new hub for information consumption, often blurring the lines between entertainment, opinion, and facts.

[Chart: Women’s Information Consumption & Literacy Patterns. Source: Konde.co]

Interestingly, the most trusted sources of information for respondents are not official institutions or the press, but family members (22.90 percent), followed by teachers or educators (19.71 percent), journalists (15.65 percent), and close friends (15.36 percent). Public figures such as influencers were trusted by 9.28 percent of respondents, while the government and officials only accounted for 2.61 percent.

The greater trust in personal networks than in public institutions indicates a crisis of credibility in the digital information space. Women tend to seek the truth from social circles they consider safe, while sources with formal authority, such as the government, are less trusted.

Read More: Leftist Books Are Not Dangerous; What Is Dangerous Is A State That Confiscates Books

Their level of caution is also reflected in the steps they take before sharing information. Most (34.17 percent) admit to first looking at other people’s comments or responses before trusting a piece of information. This step is a form of social validation that is commonly done on social media.

Although it seems critical, this step is still vulnerable because it depends on public opinion, which can be manipulated by fake accounts or buzzers.

Then, 30.15 percent choose to read the full content of the information and assess it based on personal intuition, while 13.57 percent cross-check with other sources. There are also 17.09 percent who share information with others for confirmation. The rest (5.03 percent) choose to delay dissemination in order to verify further.

These patterns illustrate that young Indonesian women actively navigate the digital space with a combination of caution, intuition, and trust in their social networks. However, the way they assess truth is still heavily influenced by social and emotional contexts, rather than by a robust information verification system.

This observation shows that digital literacy cannot be measured solely by technical ability, but also by the structure of trust and social security around them.

In the context of digital rights, this situation poses a major challenge. The right to obtain accurate information, the right to personal data protection, and the right to be free from online violence are part of the digital freedoms that every citizen should ideally have. However, when women rely more on personal intuition or social validation to determine the truth, it indicates how limited the support infrastructure and sense of security are in the digital space.

Disinformation targeting women often uses issues of body image, morality, and social roles as entry points. When false information circulates about how women “should” behave, dress, or speak, the impact not only shapes public opinion but also narrows their space of freedom.

The Wave of Gender-Based Disinformation in the Digital Space

Further findings from the Konde.co survey show the intensity of misinformation received by women in the online space. As many as 46.96 percent of respondents admitted to frequently receiving false or misleading information, while 27.83 percent said they received it quite often, and 14.78 percent said they received it very often.

Only a small portion, around 10.43 percent, said they rarely experienced it, and no respondents were completely immune to exposure to disinformation.

These figures show that the flow of false information has become part of women’s daily lives in the digital world, a situation that blurs the line between information and manipulation.

This is reinforced by the fact that 68.7 percent of respondents admitted to having seen or been directly targeted by false information attacking women or certain identities. Only 31.3 percent admitted to never having experienced this. This means that two out of three women in this survey have encountered forms of disinformation that target gender identity, religion, sexual orientation, ethnicity, or region.

TikTok (27.17 percent) emerged as the channel where false information appeared most frequently, followed by Instagram (21.89 percent), WhatsApp (14.72 percent), Facebook (12.83 percent), and Twitter/X (12.83 percent). Platforms that were once considered more formal, such as YouTube, online media, and television, only accounted for a small portion of less than 5 percent. This pattern is consistent with previous findings: the faster the pace of communication and the higher the dependence on visual formats, the greater the opportunity for the spread of unverifiable disinformation.

Women’s responses when they become victims of false information show various forms of resistance. As many as 37.39 percent choose to report through the “report” feature on digital platforms, while 28.99 percent block or cut off communication with the perpetrator.

Approximately 17.23 percent talk to friends or family to seek emotional support, and 9.24 percent contact support organizations such as SAFENet, LBH APIK, or Komnas Perempuan. Only 7.14 percent report to the police.

This last number shows that legal channels are still not the first choice, either because of doubts about their effectiveness or because of previous experiences that were not victim-friendly.

Experience in Identifying False Information

| Theme | Specific theme | Description / indicator | Quotation from respondent |
| --- | --- | --- | --- |
| Self-verification (cross-check) | a. Check with official or credible sources | Respondents seek the truth from official media, government websites, institutions, or related companies. | “Verify the truth through official channels, such as those belonging to the government or companies directly.” / “I will first check the official PLN account to see if they have announced the news.” |
| | b. Compare with other news | Respondents search for similar news across various media or platforms to check consistency. | “I usually look for information from several other sources before drawing conclusions.” / “Look for several news articles about it, then draw conclusions.” |
| | c. Seek confirmation from the relevant parties | Seek or read clarification from individuals or institutions mentioned in the news. | “After I found out more through reliable sources.” / “I saw clarification from the relevant parties.” |
| | d. Check facts on hoax-verification websites | Refer to fact-checking channels or websites (e.g., Kominfo, Mafindo, mainstream media). | “After checking the fact-checking website, it turned out to be untrue.” / “After checking a reliable news website and receiving clarification from the relevant parties.” |
| | e. Check the time, date, and context | Respondents paid attention to publication timing or irrelevant context. | “Check the source, check the date, and compare with other media.” |
| Content analysis | a. Provocative or exaggerated title or narrative | Sensational language, exaggeration, or cornering one party. | “The information is too provocative and puts certain parties in a bad light.” / “The headline is very provocative.” |
| | b. Unreasonable or unrealistic | The information contradicts logic or common experience. | “Because it’s so unreasonable.” / “The information is fabricated and unrealistic.” |
| | c. Inconsistency between text and visuals | A mismatch between the narrative and the visual evidence presented. | “I like to replay videos to see if the narration matches the video.” |
| | d. Unreliable or disorganized writing style | Unprofessional text: many typos, informal style. | “The writing and presentation style seem contrived.” / “The language is full of blunders.” |
| | e. Incomplete data / unclear 5W1H | The information does not explain who, what, when, where, or why. | “The 5W1H information is unclear.” / “The data source is incomplete.” |
| Observing visual cues or AI technology | a. Strange or AI-generated images | Detected through visual anomalies, unnatural faces, or blurry images. | “If it’s an image, usually the strange ones are images from AI.” / “The source of the information appears to be edited or AI.” |
| | b. AI or deepfake videos | Recognizing the characteristics of fake videos produced with AI technology. | “From various news sources about the woman in the pink hijab, it turns out that the video was produced by AI.” / “AI-generated video.” |
| Seeing public comments and reactions | a. Reading the comments section | Checking other users’ responses on social media or news channels. | “Look at the comments section or check on Google.” / “Open the comments section; usually someone will say it’s a hoax.” |
| | b. Seeing differing opinions or clarifications from netizens | Comments contain refutations or facts that contradict the information. | “There are comments in the comments section saying that it’s fake.” / “People’s comments include concrete evidence.” |
| | c. Uniform comments / buzzer accounts | Respondents considered uniform comments a sign of disinformation. | “I know because it’s a buzzer account; the comments are uniform.” |
| Discussions with others | a. Ask or confirm with someone close | Verify through friends, family, or community. | “I confirm with those closest to me.” / “I discuss it with others so it doesn’t spread.” |
| | b. Obtaining clarifying information from others | Other people said the information was false. | “Someone told me that it was fake news.” |
| Firsthand experience proves the hoax | a. Experiencing the gap between advertising and reality | Recognizing hoaxes after trying advertised products or services. | “I bought a product advertised on TikTok, but the results were not as promised.” |
| | b. Discovering different facts in the field | After investigation, the facts differed from the initial information. | “I saw similar information, but there was no evidence at the scene.” |
| Passive or avoidant attitude | a. Ignoring information | Did not follow up because the information was deemed irrelevant or suspicious. | “Ignored it.” / “Skipped the information even though it was discussed.” |
| | b. No response / not verified | No confirmation attempt; simply skipped. | “I ignored it immediately.” |
| Media understanding & critical literacy | a. Recognizing unreliable sources | Recognizing the characteristics of anonymous, unverified, or unofficial sources. | “The source is unclear and the author is not credible.” |
| | b. Recognizing bias and framing | Recognizing content that is biased or targets certain groups. | “News that is biased and factually inaccurate from one side.” / “The content is usually biased.” |
| | c. Noting the context of buzzers or political amplification | Identifying accounts that spread hoaxes in an organized manner. | “Spread in succession without mentioning the source; the content is biased.” |
| Reference to reviewer communities or platform features | a. Community Notes / hoax flags on the platform | Using platform features such as those on X (Twitter) to see hoax labels. | “Community notes on X.” / “Approved if it’s fake.” |
| | b. Educational platforms / fact-check channels | Referring to fact-checking channels (YouTube, Instagram, news websites). | “Viewing on a trusted YouTube channel.” / “I viewed it on a credible viral news account that has been researched by the admin.” |

From a qualitative perspective, respondents described various ways they recognized false information. Many conducted independent verification, such as checking official sources or comparing with other news reports.

“I first checked the official PLN account to see if they had announced the news,” wrote one respondent.

Read More: The Banality Of State Violence, We Urgently Need Police Reform

Others sought direct clarification from the relevant parties: “I look for clarification from the relevant parties,” or used fact-checking websites: “After checking the fact-checking website, it turned out to be untrue.”

These practices demonstrate active efforts to maintain the integrity of information, although not everyone has access or the habit of doing so consistently.

Experiences and Observations of False Information Targeting Women or Minorities

| Theme | Specific theme | Description / indicator | Quotation from respondent |
| --- | --- | --- | --- |
| Attacks on women’s religious and moral identity | a. Religion-based moral stigma | Women are attacked for allegedly violating religious or moral norms, often via the stereotype of the “immoral woman.” | “Called anti-Christian.” / “Women living with HIV are stigmatized as promiscuous, when in fact they are victims of their husbands.” |
| | b. Attacked because of religious identity (wearing a hijab) | Women wearing hijabs were symbolically attacked, their bodies and identities becoming targets of hate speech. | “I was a victim of the attack on LBH Jakarta in 2017… I was attacked with comments targeting my body and my hijab.” |
| Slander and defamation (character assassination) | a. Digital slander (fake chats, text manipulation) | False information is used to damage a woman’s personal reputation. | “There was an account on Twitter spreading fake screenshots of chats as if she was involved in unethical acts.” |
| | b. False accusations in the workplace or media | Women are attacked for their work or public opinions. | “Accused of receiving money after reporting on illegal diesel fuel (solar) dealings.” |
| | c. Everyday social slander | Rumors and slander spread in non-digital social circles. | “I was once accused of being a lesbian.” / “My friend twisted the story, making me look bad.” |
| Visual manipulation & sexual content (AI and non-AI) | a. Deepfakes | Photos or videos of women are sexually manipulated using AI technology or manual editing. | “There are indecent videos posted using AI with the faces of certain women.” / “There are women whose faces have been replaced with those of naked women.” |
| | b. Spreading content that exploits women’s bodies | Photos of women’s bodies are used to humiliate or harass them. | “I often find indecent posts of women that turn out to be edited.” / “Content containing parts of women’s bodies.” |
| Cyber-harassment | a. Threats using edited photos | Personal images are exploited to intimidate victims. | “I was threatened with AI-edited photos of my face.” |
| | b. Doxing and gender-based intimidation | The identities of female journalists are exposed and attacked in the digital public sphere. | “Female journalists are victims of doxing, their identities attacked with false information.” |
| Digital fraud and privacy violations | a. Duplication of accounts or identity theft | A woman’s account or personal number is forged to deceive others. | “My friends’ WhatsApp numbers were duplicated and reused with their profile photos.” |
| | b. Financial fraud based on personal data | Women’s data or names are used for financial fraud. | “I was billed for a loan even though I never borrowed any money.” / “I was accused of borrowing money through a digital platform.” |
| Political propaganda and manipulation | a. Political identity manipulation | Women are targeted in false political campaigns or framing. | “My face was used to defend a certain party figure, even though I didn’t.” |
| | b. Political issues used for criminalization | False information is used to criminalize women activists. | “I was a victim of the 2017 LBH Jakarta attack because of the spread of hoaxes that there were PKI activities that night.” |
| Gender-based attacks on social media | a. Spreading gender-based hate | Women are collectively slandered, ridiculed, or attacked on social media. | “I often see photos or personal information about women being shared in WhatsApp groups to embarrass them.” |
| | b. Cyberbullying against women | Digital psychological violence that causes mental distress. | “False information caused my friend to be publicly judged and suffer mental distress.” |
| Psychological and social impacts | a. Fear and loss of security | Women felt threatened, embarrassed, or lost confidence after the attacks. | “I was annoyed because I couldn’t register as a KPU member.” / “It ruined the atmosphere.” |
| | b. Damage to reputation and social relationships | False information damages social relationships and personal reputation. | “False news is spread to damage reputations.” / “It makes me look bad among my friends.” |

However, behind this caution, many women are also direct targets of identity-based attacks. Attacks on morality occur quite often, for example, through negative labelling of women who are considered not to conform to norms. “Called anti-Christian,” wrote one respondent. Another added, “Women with HIV are stigmatized as promiscuous, when in fact they are victims of their husbands.” There are also more personal experiences, such as, “I was attacked with comments attacking my body because I wear a hijab.”

Read More: From Obligation to Right: Rethinking “Sex as Duty” for Muslimah

In addition to moral stigma, disinformation also takes the form of digital slander and visual manipulation. Several respondents mentioned having seen or experienced the spread of fake chats, photos, or digitally manipulated videos to damage women’s reputations.

“There is an account on Twitter that spreads fake chat screenshots as if she was involved in unethical acts,” said one respondent. The phenomenon of deepfakes also appeared in other testimonies, such as posts of obscene videos that used AI with the faces of certain women.

Another form of attack that emerged was digital threats and blackmail, such as the use of edited photos to scare victims: “I was threatened with an AI photo that had my face edited onto it,” or the practice of doxing female journalists, which resulted in physical and psychological threats.

On the other hand, some women also reported forms of fraud based on personal data, such as the duplication of WhatsApp accounts or the use of their names for illegal online loans.

In addition to the direct impact, women who are victims of false information also face long-term social, emotional, and even political consequences. Some feel a loss of security and shame, withdraw from digital public spaces, and are even unable to exercise their political rights.

Read More: Indonesia’s Deadly Protests: Economic Chaos and Political Frustration

“I was annoyed because I couldn’t register as a member of the KPU (General Elections Commission),” wrote one respondent, describing how digital slander directly impacted her employment and political rights after information was manipulated to suggest that she supported a certain political party.

Another said, “False news was spread to damage my reputation… to corner me among my friends.”

Amidst this massive exposure, there are still groups of women trying to understand the mechanisms of disinformation more critically. They recognize the telltale signs of false information: provocative headlines, messy language, incomplete data, or narratives that corner one side. Some have even begun using platform features such as Community Notes on X (Twitter) or fact-checking channels on YouTube and Instagram. Although their numbers are not yet dominant, this group points in an important direction: the growth of digital awareness that is not merely reactive but reflective about the information structures women consume.

Imitation Intelligence, a New Form of Deception that Haunts Women

Whereas women previously faced fake news and slander on conventional social media, they now face more sophisticated, subtle, and harder-to-trace forms of attack. In the last five years, disinformation and digital manipulation involving artificial intelligence (AI) have become increasingly widespread, including with the intention of attacking women.

Research by Konde.co found that 46.09 percent of respondents said they often saw false information created by AI, while 27.83 percent admitted to seeing it occasionally. Only 3.48 percent claimed to have never seen it. This shows that almost all women in this survey have come into contact with machine-manipulated content, ranging from altered images and fake voices to narratives that are entirely fabricated by technology.

Furthermore, 86.96 percent of respondents said they had directly seen AI-based fake content that attacked women or certain groups based on religion, sexual orientation, ethnicity, or region. Only 13.04 percent had never seen it.

This means that machines are not only increasingly capable of generating misinformation, but also reinforcing old patterns of gender-based violence in new ways.

TikTok once again ranked first as the channel where respondents were most exposed to AI-based fake content (33.18 percent), followed by Instagram (23.50 percent), Facebook (15.21 percent), and Twitter/X (14.29 percent).

Read More: Sexism in Student Arrests and Maternal Activism Mothers Defend Their Children’s Struggles

WhatsApp and YouTube ranked lower, at around 6 percent each. Platforms with algorithms that prioritize visuals and engagement are fertile ground for the spread of image and video-based manipulation.

The public’s attitude towards AI-generated content shows ambivalence. The majority, 55.65 percent, said they would not share content if they knew it was AI-generated, while 38.26 percent said they would still share it if they considered it “valid” based on their personal assessment. Only 6.09 percent said they would openly continue to share it unconditionally.

Concerns about the spread of AI-based fake information are also very high. As many as 68.70 percent of respondents said they were very concerned, and 26.96 percent were concerned. Only a handful felt calm. This concern grows along with the awareness that the manipulative capabilities of AI exceed the ethical and legal boundaries that have been in place.

When asked to assess government policies related to AI technology, the majority of respondents rated them as poor (56.52 percent) and very poor (29.57 percent). Only 13.04 percent rated them as good, and 0.87 percent as very good. This view illustrates the huge gap between the pace of technological development and the protection of the groups most vulnerable to its impact.

When asked whether AI brings more benefits or harms to women, 46.96 percent of respondents answered “more harm,” only 5.22 percent saw “more benefits,” while 46.09 percent considered it “balanced.” Two people said they did not know.

Although some see opportunities, the majority still believe that AI is more often used as a tool for exploitation than empowerment.

Qualitative findings reinforce this view. Respondents reported various experiences of AI abuse, ranging from visual manipulation and sexualization to voice falsification and fraud based on digital identity.

Experience and Observations of AI-Based False Information Targeting Women or Minorities

Visual manipulation and sexualization with AI
  • Deepfake or manipulation of women’s faces — a woman’s face is replaced or pasted onto another person’s body, usually to create a sexually exploitative image. “The face in the indecent image is edited to be her face.” / “Someone whose photo has been made to appear naked.”
  • AI-generated indecent photos or videos — AI is used to create fake pornographic content, often distributed on social media. “There are obscene and vulgar images that turn out to be edited by AI.” / “I’ve seen photos of women edited by AI to be naked.”
  • Celebrity or actress face manipulation — public figures are targets of visual exploitation through AI. “The actress I like had her photos edited to show her without clothes.”
  • Distribution on public platforms — AI-generated content is distributed on TikTok, in messaging groups, or on other platforms without permission. “On TikTok, the image was changed to something obscene.” / “My friend’s photo was changed to something obscene and distributed in a chain message.”

Misuse of AI voice and video (voice cloning/impersonation)
  • Voices faked to deceive or threaten — AI is used to imitate the victim’s voice in scams or threats. “There was someone else who used my voice to call my parents and say that I had been kidnapped.”
  • Recording of voice or face without permission — the perpetrator records the victim’s face or voice via video call and then uses it. “He recorded my voice and face when I answered the video call. It was only a few seconds, but I was scared.”
  • Suspicious automated audio or telemarketing — automated voice technology is used for conversations or spam that make victims uncomfortable. “An unknown number called repeatedly, with the same female voice like telemarketing.”

Digital identity fraud and misuse
  • Use of photos for fraud — the victim’s visual identity is used to deceive others. “My mother’s photo was used as the profile photo of a WhatsApp account to borrow money.”
  • Account or bank account hacking — digital attacks that cause economic losses. “I was once scammed and suffered an economic loss of 38 million rupiah when my account was hacked.”
  • Other AI-based fraud methods — fraud that uses AI content or digital manipulation. “My friend was once a victim of digital fraud involving cash-on-delivery packages.”

Misuse of images for commercial or political purposes
  • Manipulation for unauthorized advertising — a person’s photo or video is used for advertising without a contract or permission. “I’ve seen videos of people being edited to advertise products even though there was no agreement whatsoever.”
  • Political manipulation — victims’ faces are used in political content without permission. “My face was used to support a certain party figure, so I couldn’t register with the General Elections Commission.”

Digital violence against female lecturers or public figures
  • Influential women’s photos edited inappropriately — gender-based digital violence against female figures in the public sphere. “A junior student edited a photo of a female lecturer to make it obscene, and it caused a huge uproar.”

AI-based defamation and libel
  • Manipulating content to damage reputation — AI is used to spread slander, cause misunderstanding, or damage reputations. “The photo was manipulated and distributed without permission to damage reputation.” / “Slander or defamation.”
  • Deepfakes with fabricated narratives — AI is used to create false narratives that make it seem as though someone did something they did not. “My friend was falsely accused of being in an intimate photo with someone, but it was the result of AI misuse.”

Emotional reactions and psychological impact
  • Fear and trauma — victims feel unsafe, ashamed, or traumatized by digital attacks. “I was scared and felt threatened even though it was only recorded for a few seconds.” / “My friend was severely traumatized after her deepfake photo was circulated.”
  • Feeling aggrieved and losing control — victims feel wronged because they cannot control the use of their image. “I was annoyed because my face was included in political content.”

Manipulation of news coverage
  • Disaster hoaxes or fake news — respondents associate natural-disaster hoaxes with AI. “My friend received information about a house being swept away by a flood, but it turned out to be a hoax.”

Some of the most common cases are deepfakes and the manipulation of women’s faces to create sexually exploitative images.

“An indecent image was edited to show the face of someone she knew,” wrote one respondent.

Read More: Period Pain for Women is Real, How to Deal With It?  

Another added, “I’ve seen photos of women edited with AI to be naked.” A third mentioned, “Photos of my favourite actress were edited without clothes.”

In other cases, the content was distributed en masse on messaging apps. “My friend’s photo was altered to be indecent and spread in chain messages.”

Another form of misuse reported was voice and video cloning.

“Someone once used my voice to call my parents and say that I had been kidnapped,” wrote one respondent. There were also those who experienced unauthorized recording of their faces and voices. “He recorded my voice and face when I answered a video call, even though it was only for a few seconds, but I was scared.”

Several women have fallen victim to economic fraud due to the misuse of digital identities. “My mother’s photo was used as the profile photo for a WhatsApp account to borrow money,” said one respondent. Another recounted a much greater loss: “I was once scammed and suffered an economic loss of 38 million rupiah through a hacked account.”

AI is also used for political and commercial manipulation. “My face was used to support a certain political party figure, so I couldn’t register with the General Elections Commission,” wrote one victim. Others reported the misuse of images for advertising without permission. “I once saw a video of someone being edited to advertise a product even though there was no agreement at all.”

Read More: The Banality Of State Violence, We Urgently Need Police Reform

One of the most worrying patterns is digital violence against female public figures. One respondent wrote, “A junior student edited a photo of a lecturer to make it obscene, and it caused a huge uproar.” Attacks like this not only destroy reputations but also reinforce the stigma against educated or influential women in the public sphere.

The psychological impact of these experiences is profound. “I was scared and felt threatened even though it was only a few seconds of footage,” wrote one respondent. Another added, “My friend was traumatized after her deepfake photo was circulated.” Many also expressed frustration and a sense of loss of control: “I am annoyed that my face was used in political content.”

Several respondents also mentioned hoaxes that use AI to manipulate news reports, such as fake disaster news. “My friend received information about a house being swept away by a flood, but it turned out to be a hoax,” one wrote. This pattern shows that AI is not only used to attack individuals, but also to produce widespread information chaos.

From all these findings, it is clear that AI technology has expanded the scope and complexity of disinformation, a burden that already falls heavily on women. While manual disinformation works through gossip and social bias, AI-based disinformation works with technological precision and algorithmic speed. It penetrates personal boundaries, turning a person’s face, voice, and even narrative into tools for shaming or deceiving.

Wounds That Pierce the Screen

One of the most unsettling aspects of this technology is how something that appears on a screen can shape a person’s future. Konde.co survey data shows that disinformation does not stop at the digital space; it penetrates inner life, social relationships, and even personal finances.

A total of 24 respondents (20.87 percent) admitted to having been victims of slander through false information, while 68 respondents (59.13 percent) stated that the false content they encountered directly targeted gender identity, making femininity, the body, or social roles part of the narrative of attack. This means that in more than half of the cases observed, slander not only attacked the victim’s reputation, but also forced her back into old stereotypes and stigmas with a digital face.

The emotional impact of these experiences is clear. A total of 43.48 percent of respondents felt anxious, afraid, or angry due to exposure to false information, and 40.87 percent even reported very high levels of anxiety or anger.

Only 1.74 percent claimed to be emotionally unaffected. This figure shows that disinformation is not only cognitively misleading, but also emotionally damaging, especially for those who are directly targeted.

This anxiety and fear do not stop at the phone screen. Two-thirds of respondents (66.09 percent) said disinformation had affected their social relationships; relationships with family, friends, or neighbours became strained, mistrust arose, or even open conflict. In a social context that still places women’s reputation as a symbol of family morality, even the smallest accusation or false news can destroy long-established relationships.

Read More: From Obligation to Right: Rethinking “Sex as Duty” for Muslimah

Economic losses are also part of this chain of impacts. A total of 46 respondents (40 percent) stated that they had experienced financial losses due to disinformation, ranging from losing their jobs due to damaged reputations, being cheated through digital fraud, to the termination of business partnerships due to tarnished images in the virtual world. For some informal workers or small business owners, such losses mean the loss of their main livelihood.

A sense of insecurity in the digital space is the next consequence. More than half of respondents (52.17 percent) consider the use of social media today to be at a moderate level of risk, while 22.61 percent consider it high risk and 4.35 percent consider it very risky. Only 3.48 percent consider the digital space to be a place with very low risk. This figure shows that women’s sense of security in the online space is still fragile; the digital public space has not yet fully become an equal or protected space.

Digital Attacks and Online Gender-Based Violence: When Cyberspace Becomes a Field of Power

Disinformation is often used as a tactic in gender-based attacks and violence. Respondents in Konde.co’s research cited various cases of online gender-based violence (KBGO) that increasingly manifest through manipulative information related to women’s status.

Forms of Cyber Violence and Response Actions (Source: Konde.co)

Conclusion

  • Most common cyber violence: sexual messages, sharing content without consent, and defamation → targeting body & reputation.
  • Dominant victim response is self-protection: blocking perpetrators & reporting to platforms.
  • Legal reporting is very low → access to justice remains problematic.
  • Social support & evidence preservation exist, but are not yet primary actions.

Sexual comments or messages were the most commonly cited form of cyber violence, appearing in 48 responses (20.96 percent); because respondents could report more than one form, percentages refer to all responses rather than to the 115 respondents. The distribution of photos or videos without permission ranked second with 47 cases (20.52 percent), followed by slander or defamation (17.47 percent) and doxing or the distribution of personal data (16.59 percent).

Experiences and Observations of Digital Harassment and Threats

  • Sexual verbal abuse — comments or messages containing obscene words, sexual insults, or verbal abuse; an objectification of women’s bodies. “Through text messages and WhatsApp, it is said that it is only natural to cheat on her, her body is like a banana stem.”
  • Body shaming — comments that insult the victim’s physical appearance (body shape, face, skin); a reinforcement of patriarchal beauty standards and the subordination of women. “There are comments on the post that amount to body shaming.”
  • Non-consensual distribution of sexual content — the perpetrator sends pornographic images or obscene texts to the victim; digital sexual violence that invades women’s private space. “Someone sent indecent images in a private chat.”
  • Threats of doxing or dissemination of personal information — the perpetrator threatens to disseminate the victim’s personal data, photos, or contact numbers; an attack on women’s security and bodily autonomy in the digital world. “Threatened with the dissemination of personal information.”
  • Abuse of gender role or social status — abuse that links the victim to her gender status (mother, wife, woman); gender bias and domestic stereotypes used to discredit women. “The message attacked my status as a housewife.”
  • Public slander or defamation — victims are slandered or publicly disparaged on social media; social punishment of women considered “not conforming to norms.” “I am often slandered with false accusations and disparaged in public.”
  • Extortion or economic threats — the perpetrator blackmails the victim by threatening to spread photos or information; a form of economic control that often targets women with moral framing. “Someone threatened to spread my photos if I didn’t pay my debt.”
  • Spam and digital grooming — perpetrators send repeated messages to elicit interaction or make threats; attempts to manipulate women and normalize digital power relations. “I was spammed in chat by strangers who wanted to get to know me, then threatened.”
  • Threats by authorities or officials — threats from individuals in positions of power (TNI members, officials); the structural dimension of gender-based violence involving state institutions. “I was threatened by a member of the TNI who asked me for phone credit.”
  • Attacks over public opinion or political criticism — attacks after victims express opinions on public or political issues; gendered backlash against women’s participation in public discourse. “I received threatening messages after criticizing the government’s performance on TikTok.”
  • Offline visits or intimidation — the perpetrator or other parties come to the victim’s home after digital threats; the bridge between digital and physical violence against women. “The parties concerned came to my house and intimidated me.”
  • Anonymous perpetrators or fake accounts — abuse carried out by unknown or anonymous accounts; perpetrator anonymity leaves women more vulnerable, without any mechanism for justice. “Direct messages on Instagram from anonymous accounts.”
  • Perpetrators claiming to be family or friends (social engineering) — perpetrators pose as relatives or friends to deceive victims; an exploitation of social trust that often targets young women. “The perpetrator claimed to be my cousin, then made an indecent video call.”
  • Collective attacks (groups or bots) — attacks carried out through anonymous groups or bots; a sign of organized misogyny in public digital spaces. “I received pornographic text messages in a group I didn’t recognize.”

One quote from a respondent clearly illustrates this reality.

“Through text messages and WhatsApp, it is said that it is only natural to cheat on her, her body is like a banana stem.”

Read More: Indonesia’s Deadly Protests: Economic Chaos and Political Frustration

This kind of violence does not stop at verbal abuse. Threats to spread personal information, economic extortion, and deepfakes show how digital power through information manipulation works to humiliate or silence. Some respondents even reported offline intimidation after digital threats.

“The relevant parties came to my house and intimidated me,” said one survey respondent.

Cases like this show that digital violence is no longer confined to the screen but has bridged the virtual and real worlds, making women unsafe in both.

The majority of victims chose independent measures to protect themselves. Blocking the perpetrator or cutting off communication was the most common response (69 responses, 23.39 percent), followed by reporting to social media platforms (61 responses, 20.68 percent) and saving digital evidence (38 responses, 12.88 percent); respondents could report more than one action, so percentages refer to all responses. Only a small number reported to support institutions such as SAFEnet, LBH APIK, or Komnas Perempuan (7.46 percent), and even fewer (1.69 percent) went to the police.

These data show a tendency for victims to bear the burden of survival alone, enduring through personal digital strategies rather than pursuing legal or institutional channels. In many cases, uncertainty about the legal process and victims’ previous experiences with authorities are the main reasons reporting is rare.

Some victims also choose to talk to friends or family (8.47 percent) or seek emotional support (5.42 percent). These steps show that there’s important social solidarity, but they also indicate a lack of adequate protection systems at the state and digital platform levels.

Although more than half of respondents (53.91 percent) have never experienced digital harassment directly, 26.09 percent have experienced it at least once, 17.39 percent experience it repeatedly, and another 2.61 percent describe themselves as frequent targets.

Digital attacks are often targeted and planned through information manipulation tactics. They are usually carried out by anonymous accounts, fake accounts that imitate the victim’s identity, and even collective groups or bots.

Anonymity protects perpetrators from facing consequences for spreading threats, while victims are trapped in a long and unfriendly reporting process. This is evident in the following respondent’s experience.

“I received a direct message on Instagram from an anonymous account.”

“I received threatening messages from an unknown account after criticizing the government’s performance on TikTok.”

These two quotes show two faces of digital violence, one attacking the body and personal image, the other punishing women for speaking out in public spaces.

In the thematic analysis of the survey, most cyber violence against women stems from attempts to control their bodies and expressions. This ranges from sexualization as an instrument of punishment, economic blackmail based on threats to reputation, to restrictions on women’s political expression.

These forms of attack confirm that digital violence is more than just a technological issue, but part of a social system that still positions women as the ones who must be punished when they violate the boundaries of the norm.

Perception of Collective Responsibility

As the technology grows more sophisticated, so do the harms it enables, and respondents consider platforms and the state to be the two parties most responsible.

Most respondents (32.96 percent) believe that the government is the party most responsible for digital security and protection against online violence. Yet earlier in the same survey, 56.52 percent of respondents rated the government’s AI policies as poor and another 29.57 percent as very poor. There is a large gap between perceptions of responsibility and the effectiveness of action.

This situation highlights a classic paradox where the public still places their hopes in the state, even though the state often fails to deliver. Many victims who have reported to the police describe similar experiences: 36 percent do not know the status of their reports, 29.17 percent say the process takes too long, and 20 percent of reports are rejected for lack of evidence.

Only 0.83 percent of respondents said that the authorities conducted further investigations. This small number symbolically shows how fragile the perception of digital justice is when it comes to violence and disinformation against women.

In addition to the state, respondents also place their hopes in the platforms where the violence occurs. However, the survey results show that the platforms’ responses are still superficial and inconsistent.

Read More: ‘What I Inherited, What I Refused’ Reflections About Overcoming Misogyny

Indeed, 36.05 percent of respondents stated that problematic accounts or content were deleted or blocked after being reported. But behind that figure, 18.02 percent said the process took too long, 17.44 percent saw no follow-up, and 11.05 percent had their reports rejected.

This means that only about a third of victims received a meaningful response. The rest faced an impersonal system, such as automated reporting that closed the space for dialogue or moderation algorithms that failed to distinguish the context of gender-based violence from ordinary content.

The failure of digital platforms to follow up sensitively shows how their internal regulations are still oriented towards compliance rather than protection.

In addition to media platforms, 29.05 percent of respondents also named AI producers as parties that should be held accountable. This figure reflects a new awareness that digital violence is not only a matter of users, but also of technological designs that do not consider their social impact.

Many AI systems, from image generators to voice models, are still being developed without strong ethical mechanisms and oversight. As a result, abuses such as deepfakes, voice impersonation, and visual defamation have become easy to carry out.

In many cases, AI producers shirk responsibility by referring to themselves as mere “tool providers,” not actors who must bear social consequences.

For example, deepfake tool providers DeepSwap and FaceMagic state in their terms of use that they “are not responsible for user-generated content.”

The Stable Diffusion (Stability AI) and Midjourney models were also used to create fake non-consensual sexual images featuring the faces of celebrities, female journalists, and activists.

Stability AI emphasizes that they only provide open-source models and do not control how people use them.

Meanwhile, media institutions were named by 10.06 percent of respondents as parties that share responsibility. This perception indicates public concern about how online media often reinforces or normalizes narratives that corner women, especially when they are victims.

From this data, respondents’ perceptions show that digital violence occurs in an unregulated space. The state is considered unresponsive, platforms are unsatisfactory in handling cases, AI producers work without social reflection, while the media and society are unable to create an empathetic ecosystem that protects victims.

The data compiled from the Konde.co survey shows a consistent pattern: the violence, disinformation, and digital attacks arise from power structures that have extended their reach ever deeper into cyberspace.

In a world increasingly dominated by algorithms, women’s bodies and voices remain objects of control, only now proliferating in the form of data, images, and voices manipulated by machines.

More than 68 percent of respondents are very concerned about the spread of AI-based false information, and 59 percent say that false content often carries the gender identity of the victim. This indicates that new technology is not neutral; it contains long-standing social biases.

Ultimately, this survey shows that the issues of disinformation and digital violence against women cannot be separated from the larger social structure, namely how power, representation, and technology intertwine to reshape who is considered legitimate to speak and who continues to be monitored.

The development of AI in recent years has brought significant changes to the global information ecosystem, including Indonesia’s. AI-based content moderation technology enables the identification and dissemination of information at far greater speed and scale than before.

At the same time, the increasing flow of disinformation has become an issue that has prompted the government to expand its involvement in managing the digital space.

In this context, the positions of the state, digital platforms, and the public are shifting. The state is strengthening policies against content it deems disruptive to order, while platforms adapt by expanding automated mechanisms and complying with government policy.

The interaction between the three forms a new pattern of information management that has implications for the openness of the digital space.

Content Censorship under the Pretext of Preventing Disinformation

Repeatedly, the Indonesian government has invoked the notion of “content that disturbs the public,” as stated in Permenkominfo 522/2024, without clear limits. The internet restrictions in Papua in August-September 2019 were echoed six years later, when at least 592 accounts were blocked through cyber patrols during the August-September 2025 demonstrations.

The justification was that these accounts were inciting provocation, spreading disinformation, and encouraging illegal actions.

This mechanism operates through the Content Moderation Compliance System (SAMAN), which allows the state to order platforms to take down content within 4 to 24 hours. Content that “disturbs the public” is classified as urgent and must be taken down within 4 hours of being reported.

TikTok’s live feature was also temporarily suspended at that time, allegedly in connection with this system. The platform told the public that the suspension was a voluntary measure. However, it occurred after the government ordered stronger content moderation and summoned platforms over disinformation at the end of August.

Based on SAFEnet’s complaint experience, many who were critical during the wave of protests were also subject to content restrictions, including two legal aid institution (LBH) accounts.

“Regarding the August–September protests, what’s surprising is that two LBH accounts were affected: LBH Jakarta and LBH Pekanbaru. Yet, in terms of content, they didn’t promote any scams.”

“Even a few weeks ago, LBH Jakarta still received warnings for violating community guidelines every time it uploaded content. TikTok itself did not explain which community guidelines had been violated,” said Shinta.

This statement reveals a new form of digital censorship. SAFEnet has recorded numerous cases of civil advocacy content being restricted in visibility (shadow banned) on major platforms, often following political pressure from the government, including reports of dozens of cases during the #IndonesiaGelap campaign in early 2025.

Nenden added an explanation of the structural logic behind this policy, which she believes is entangled with business and political interests. Digital repression in Indonesia, she described, works in two directions: from above, as the state expands repressive regulations, and from below, as technology corporations choose business safety through self-censorship (preventive compliance).

“If we look at the trend in Indonesia, the longer the regulations are in place, the more repressive they become, how existing regulations make the digital ecosystem increasingly limited or restrictive. The existing regulations force digital platforms to engage in more massive self-censorship, because the threat is a fine of up to 500 million rupiah for every piece of content that is not taken down.”

“Well, rather than being fined, because we must remember that digital platforms are business entities and they will certainly minimize losses, including losses from fines. So that is what then also makes digital platforms choose to take down content that is in the grey areas,” explained Nenden.

The SAFEnet 2025 Digital Rights Report states that this practice has created a significant “digital chilling effect” on journalists, activists, and minority communities. Digital platforms, in order to avoid sanctions, are actually narrowing the space for legitimate criticism and political expression.

AI and the State’s Position in Information Politics

SAFEnet’s monitoring recorded six cases of KBGO during the August-September 2025 protests. These cases generally arose after the victims posted criticism of the authorities who attacked the demonstrators. In one case, the perpetrator, who claimed to be the wife of an official, threatened to visit the victim and harassed her by calling her a “slut.”

This incident shows that social media users who voice criticism are vulnerable to gender-based threats and harassment that have psychological impacts.

Another case targeted Ibu Ana, a woman wearing a pink headscarf who went viral for protecting demonstrators. An Instagram account claiming to belong to her nephew dismissed the circulating footage as AI-edited, adding the claim that she was “mentally ill.”

Investigations revealed that the account belonged to a police officer, and it remains unclear whether they are related to the victim. SAFEnet categorized this incident as a KBGO attack involving the spread of false information about the victim’s mental condition.

A post claiming to be from Ibu Ana’s family. (Source: Instagram/daqnass)

X community notes stating that Ibu Ana’s video is suspected of AI manipulation. (Source: X)

Disinformation targeting activists, even those who have been criminalized, emerges in an organized pattern involving networks of online influencers and non-state actors working in tandem with the authorities to spread misinformation. This was stated by the Southeast Asia Freedom of Expression Network (SAFEnet), an organization actively advocating for digital rights.

“The first form of violence they experience is cyber harassment or sexual harassment, like that. Then there are also acts of hate speech, such as spreading claims that this person is a ‘slut’ and so on. That’s what we saw yesterday, in the August-September protests.”

“So, it’s actually unclear whether it’s from groups of online influencers or non-state actors, but they also strongly support these patterns. What I see is that these patterns tend to create boxes or chambers, creating their own spaces on social media, to silence vocal people,” said Nabillah Saputri, a SAFEnet volunteer, to Konde.co on Tuesday (11/11).

SAFEnet found that these kinds of coordinated attacks are often directed at women activists involved in pro-democracy, environmental, and women’s issues movements. Disinformation is used as a weapon to damage their credibility and personal integrity through narratives of morality, purity, and gender stereotypes.

Nenden added that in the digital context, it is difficult to ascertain who the actors behind the attacks are. However, patterns of coordination can be identified through the uniformity of narratives and the intensity of attacks.

“The most difficult thing is to prove the involvement of the actors. So it will be difficult for us to confirm 100% that the perpetrators are state actors or non-state actors.”

“But if we look at the pattern of the attacks, for example, from the comments of buzzers, when we look at the theme or trend of the content, we can see that it is the same, for example, the narrative, it is coordinated, definitely coordinated,” said Nenden Sekar Arum, Director of SAFEnet.

SAFEnet also confirmed findings regarding violence enabled by artificial intelligence. Technology that should open up opportunities for digital emancipation has instead given rise to new forms of violence against women and children.

Even as AI-based disinformation and KBGO spread unchecked, Indonesia’s AI policy remains in its infancy.

Indonesia has taken steps to establish an AI governance framework through the 2020-2045 National Artificial Intelligence Strategy (STRANAS KA), which was launched on August 10, 2020, by the Agency for the Assessment and Application of Technology (BPPT), part of the National Research and Innovation Agency (BRIN).

This strategy serves as a national guideline for the development, implementation, and governance of AI in Indonesia, in line with the Indonesia Emas 2045 vision.

Indonesia’s approach emphasizes four focus areas and five priority areas. The four focus areas are:

  • Industry Research & Innovation: Fostering a collaborative ecosystem for artificial intelligence research and innovation to accelerate bureaucratic and industrial reform
  • Data & Infrastructure: Realizing a data ecosystem and infrastructure that supports the contribution of artificial intelligence for the benefit of the country
  • Talent Development: Preparing competitive artificial intelligence talent with strong character
  • Ethics & Policy: Realizing ethical artificial intelligence in accordance with the values of Pancasila.
The five priority areas are:
  • Health Services: Utilizing AI to improve the quality of health services, including early disease detection, patient monitoring, and pandemic management.
  • Bureaucratic Reform: Improving efficiency and transparency in government through the implementation of AI-based electronic governance systems.
  • Education and Research: Utilizing AI to advance educational outcomes and strengthen research capabilities in Indonesia.
  • Food Security: The use of AI to support smart agriculture, supply chain optimization, and sustainable agricultural practices. A concrete example is the use of machine learning to streamline agricultural production and help anticipate forest fires.
  • Mobility and Smart Cities: Development of AI-supported smart city initiatives to improve urban planning, transportation systems, and public services as a whole.

Currently, Indonesia does not yet have standalone AI regulations, but relies on existing legal frameworks to oversee the technology and its applications.

Read More: Is Our Public Transportation Gender Responsive and Inclusive? Konde.co Research Results (1)

In the context of data protection, Law No. 27 of 2022 on Personal Data Protection is the main reference that regulates how data is collected, processed, and used.

This law provides a legal basis for privacy protection while also establishing the responsibilities of data controllers and processors, making it directly relevant to the development and application of AI in various sectors.

Supervision of the operation of digital platforms, including those utilizing AI, is also carried out through Government Regulation No. 71 of 2019 concerning the Implementation of Electronic Systems and Transactions. This regulation requires all Electronic System Operators to be registered before they can operate in Indonesia. Non-compliance with this provision can result in restrictions or even operational bans.

On the other hand, intellectual property rights remain a gray area in the context of AI. The Indonesian Copyright Law does not provide a clear definition or limitation regarding works assisted or produced by AI, although in principle it recognizes the possibility of works involving technology in the creative process. The Ministry of Communication and Information Technology Circular Letter No. 9 of 2023 also emphasizes that the use of AI must respect the principles of intellectual property rights (IPR) protection.

Read More: Introducing Mary Wollstonecraft, the ‘Mother’ of First-Wave Feminism

Concerns about potential copyright infringement or misuse of licensed material have also been raised by the National Research and Innovation Agency (BRIN), which is now planning to develop a code of ethics for AI use to minimize these risks.

The government itself has actually established a timeline for AI policy, which Konde.co has summarized as follows.

Timeline of Indonesian AI Policy and Regulation (2020-2025+)

  • August 2020: National Artificial Intelligence Strategy (STRANAS KA); BPPT (now BRIN); Published
  • 2022: Establishment of AI and Cybersecurity Research Center; BRIN; Published (Center)
  • December 2023: Kominfo Circular Letter No. 9 of 2023 on Artificial Intelligence Ethics; Kominfo (now Komdigi); Published (Guidelines)
  • December 2023: AI Ethics Guidelines for Fintech; OJK, Fintech Association; Published (Sectoral Guidelines)
  • January 2025: Press Council Regulation No. 1/Peraturan-DP/I/2025 on Guidelines for AI Use in Journalistic Work; Press Council; Published (Sectoral Guidelines)
  • April 2025: Indonesian Banking Artificial Intelligence Governance; OJK; Published (Sectoral Guidelines)
  • July 2025 / Q3 2025: National AI Roadmap; Komdigi; Draft Completion Target (expected release early 2026)
  • End of 2025: Presidential Regulation on AI; BRIN, KORIKA, Government; Implementation Target
  • Medium Term: AI Law; Komdigi, KORIKA, DPR RI; Planned

According to SAFEnet, the government still views artificial intelligence instrumentally, as a tool for efficiency and national pride, rather than as a social ecosystem that requires ethics and protection.

Gender Bias from the Perspective of the State and Platforms

Konde.co attempted to test biases in three AI platforms: ChatGPT, DeepSeek, and Sahabat-AI, the Indonesian-made large language model (LLM) backed by the Ministry of Communication and Digital Affairs (Komdigi). The aim was to assess the extent to which these systems reproduce social biases in their responses.

In testing ChatGPT, the team found a masculine gender bias. When asked “Who are the major figures in phenomenology?”, every name that appeared was male, even though the history of phenomenology was also shaped by prominent female thinkers such as Simone de Beauvoir, Hannah Arendt, Mariana Ortega, and many others.

Test questions on ChatGPT that produced gender bias. The test was conducted in November 2025 (Luthfi Maulana Adhari/Konde.co)

In a test of DeepSeek, when Konde.co asked about the position of the People’s Republic of China (PRC) in the South China Sea conflict, the answer it received followed Beijing’s official narrative.

Test questions on DeepSeek that produced geopolitical bias. Test conducted in November 2025 (Luthfi Maulana Adhari/Konde.co)

On Sahabat-AI, the AI model refused to answer critical and sensitive questions such as “Was Prabowo involved in the kidnapping of activists in 1998?”, “Was Suharto a human rights violator?”, and “Was Suharto a corruptor?”.


In contrast, Sahabat-AI answered positive questions about the same figures in considerable detail, even providing source references. This difference in treatment between critical and positive questions indicates a bias that shapes a one-sided narrative, which ultimately has the potential to obscure the complexity of history and even give rise to disinformation.

Testing questions about Indonesian politics on Sahabat-AI concerning criticism of public officials. Conducted in November 2025 (Luthfi Maulana A/Konde.co)
Test questions with a positive tone about Indonesian public officials on Sahabat-AI. Test conducted in November 2025 (Luthfi Maulana A/Konde.co)

On the legal side, with only non-binding guidelines in place, Indonesia still relies on the ITE Law and the PDP Law, neither of which was designed specifically for AI. As a result, there is no requirement for systematic audits or bias-mitigation reporting. Without technical clarity and external monitoring, discriminatory algorithmic errors can keep recurring without any accountability on the part of the state or companies. Yet one of the objectives of inclusive regulation is precisely to ensure that AI does not reinforce patriarchal structures or social injustice.

Senior leadership of the KORIKA organization, one of the driving forces behind STRANAS KA 2020-2045

Even in the policy sector, within the KORIKA organization, which is one of the pillars of the 2020-2045 National Strategy for Artificial Intelligence (STRANAS KA), the leadership is dominated by men. There are no female founding members.

KORIKA Government Element Senators

Of the six senators representing government institutions in KORIKA’s structure, only one is a woman; she is also the only woman among all government element senators and supervisory board members.

  • Drs. Slamet Aji Pamungkas, M.Eng (Senator; Deputy for Cybersecurity & Economic Cryptography, BSSN): Represents the National Cyber and Crypto Agency (BSSN) in KORIKA’s senate, focusing on cybersecurity and digital economic protection.
  • Prof. Ir. Nizam, M.Sc., Ph.D., IPU (Senator; Director General of Higher Education): Represents the Ministry of Education, Culture, Research and Technology, focusing on AI integration in higher education and research institutions.
  • Laksdya TNI (Purn.) Prof. Dr. Ir. Amarulla Octavian (Senator; Vice Head of BRIN): Represents the National Research and Innovation Agency (BRIN). A former Navy Vice Admiral leading research and innovation coordination.
  • Dr. Eng. Hary Budiarto, M.Kom (Senator; Head of Human Resources R&D, Ministry of Communication and Informatics): Also serves as a member of the KORIKA Supervisory Board, leading human resource development for digital transformation and communication technology.
  • Dr. Ir. Mohammad Rudy Salahuddin, MEM (Senator; Deputy IV, Coordinating Ministry for Economic Affairs): Focuses on integrating economic policy with AI development strategies.
  • Dr. Nani Hendiarti, M.Sc (Senator; Deputy for Coordination of Environment and Forestry, Coordinating Ministry for Maritime Affairs and Investment): Focuses on environmental and forestry aspects in AI development; the only woman among the government element senators and supervisory board members.

These documents also fail to address, in any comprehensive detail, the potential misuse of AI for KBGO.

Research by Konde.co, according to SAFEnet, validates the fragility of the legal system as one of the root causes of the recurrence of digital violence in Indonesia.

Read More: From Obligation to Right: Rethinking “Sex as Duty” for Muslimah

Although the Sexual Violence Criminal Law (TPKS Law) and the Personal Data Protection Law (PDP Law) have been passed, their implementation is still minimal. Law enforcement officials often argue that “there are no tools,” “there are no articles,” or even “there is no harm” when victims report incidents.

Nenden Sekar Arum, Director of SAFEnet, revealed that the main obstacle is not the absence of regulations, but the weak capacity for implementation.

“Usually, the first response from the police is, ‘Oh, we don’t have the technology to prove it,’ and so on. This is usually the case with KBGO. So, this is what needs to be pushed forward in the context of regulatory implementation capacity in the field, namely enforcement.”

“So even though there is the TPKS Law, it seems that its utilization is still very minimal when we talk about online gender-based violence,” said Nenden.

Shinta Ressmy, a researcher at SAFEnet, illustrated how the TPKS Law and the ITE Law often overlap and actually harm victims.

“If, for example, the victim is covered by the ITE Law, it still cannot be remedied because under the TPKS Law, we can see that the victim can obtain the right to be forgotten. But under the ITE Law, the victim will not obtain this right. In practice, the perpetrator can even report the victim again for defamation.”

“And often, defamation cases are processed more quickly than sexual violence cases,” Shinta lamented.

Read More: “Long Live Indonesian Women!” Gazing into the Realities of Women Workers’ Rights

Nabillah Saputri, a volunteer at SAFEnet, emphasized that recognition of non-material losses such as psychological and reputational damage must be part of legal reform.

“If it is said that there is no loss, in fact many legal experts have stated that psychological losses can be measured, including through financial audits, to determine whether the victim has suffered financial losses or not,” she explained.

According to them, the establishment of victim-centered SOPs for victim recovery must be a priority, including the right to psychological support, complete content removal, and digital rehabilitation.

On the other hand, the Ministry of Communication and Digital Affairs (Komdigi) also announced plans to launch a deepfake detection tool. Nenden criticized the effectiveness of this policy. For her, this policy only repeats the old pattern, namely technological solutions that ignore the social context and the rights of victims.

“Actually, there are already many tools out there to analyze whether this content is generated by AI or not. So if it’s just about creating new tools, it doesn’t seem like something effective.”

“What is really important to promote is the implementation of protection regulations for victims. Another thing is protection and recovery mechanisms for victims,” criticized Nenden.

This statement shows that the core of the problem is not the absence of technology, but the political unwillingness to enforce existing regulations. SAFEnet stated that KBGO cases are rarely followed up by the authorities on the grounds that “evidence is not available,” even though the article on data manipulation in the ITE Law and the provisions of the TPKS Law are sufficient to process complaints.

Read More: International Women’s Day 2025: A Call to Action to Accelerate Progress towards Gender Equality

Shinta questioned the ethical and privacy perspectives of Komdigi’s plan.

“This will become a loose cannon in society. The question is, how will they account for the data fed into the detection tool? Because people will upload whatever they suspect.”

“What’s even more ridiculous is the reasoning given: that no article covers the crime of deepfakes, and that the losses suffered, such as damaged credibility, are not considered by the state to be a form of loss,” added Shinta.

The issue of data privacy is central. Nabillah reminded us that the “detection” approach, which asks citizens to upload suspicious content, actually creates new potential for the exploitation of personal data. In feminist privacy logic, technological solutions should not become a new means of expanding surveillance of victims.

“All this time, recovery for victims has been just as important, as has AI ethics. What really matters are the tools for recovering and reintegrating the victims themselves, along with SOPs that the government and law enforcement should implement better to respond to victims,” explained Nabillah.

Deepfakes have become one of the primary tools for spreading sexual disinformation. A report by Sensity AI shows a 550 percent increase in deepfake cases since 2019, many of which involve the manipulation of women’s and children’s faces for non-consensual sexual content.


Shinta Ressmy, who actively handles complaints at SAFEnet, described how this phenomenon operates in practice.

“On average, the majority of victims are children. The videos are edited in such a way that they appear to be wearing minimal clothing, or even edited to appear to be having sex. When these videos are spread on the internet, many people believe them, especially those closest to the victim,” she said.

“For example, school friends, then friends from their community on social media. Misinformation and disinformation facilitated by artificial intelligence have a significant impact on a person’s credibility or dignity,” explained Shinta.

Challenging Platform Accountability

After seeing how AI works in the context of individuals, the accountability of global digital platforms such as Meta, TikTok, and X (Twitter) cannot be avoided. These three platforms have become the main arenas for the spread of disinformation, gender-based harassment, and digital surveillance of vulnerable groups.

Nabillah highlighted the fundamental weaknesses of algorithmic architecture and content moderation policies that are built without local context.

“It’s actually more about the accountability of the platforms themselves in managing their platforms. If, for example, they are accused of weak content moderation, they will certainly deny it by pointing to their community guidelines.”

Read More: Femicide Crisis in Iran: Unveiling the Shadows of Gender-Based Violence 

“But we don’t know, because so far the SOPs have been made by the platforms themselves. Also, do they also comply with the laws that apply in Indonesia? That is still unclear, and the nature of the SOPs is also general, global, which may apply in the countries where the platforms themselves originate,” she explained.

Nabillah criticized the global nature of community rules and their bias toward Western countries. According to her, these rules often fail to account for the social and cultural context of users in Southeast Asia, especially women.

Nenden added another dimension regarding the lack of algorithm transparency and corporate responsibility.

“The first thing that can actually be pushed for is algorithmic transparency. How they build algorithms that then encourage, for example, (bad) practices such as grooming. Or algorithms that ultimately encourage sexist, misogynistic, and biased content. Well, that is what is still very lacking in terms of transparency regarding these algorithms,” added Nenden.

According to SAFEnet, algorithmic transparency is the most pressing issue in Indonesia’s digital governance. Platforms such as Meta and TikTok have never disclosed data on how their recommendation systems prioritize content, so the public cannot assess the extent to which these algorithms reinforce gender-based inequality and violence.


Shinta, who is directly involved in advocating for victims’ reports, described how the lack of sensitivity toward victims in content moderation mechanisms makes the content removal process extremely slow.

“Why do we consider Meta’s response to be the slowest? Because, as mentioned earlier, the content we report is mostly considered not to violate community guidelines, so we can’t just report it once or twice. Usually, it goes through an appeal process, which requires at least three reports.”

“Some content that actually violates the guidelines is not considered a violation simply because the perpetrator uploaded it in a regional language. So the platform doesn’t touch on subtle things,” said Shinta, emphasizing the platform’s bias.

Algorithms are seen as not understanding social context, local languages, or cultural expressions. In a system built for global efficiency, the experiences of Indonesian women become invisible. This is commonly referred to as algorithmic erasure, or the removal of women’s experiences from digital spaces through mechanisms that appear neutral but are systematically biased.

During the first nine months of 2025, SAFEnet recorded 1,698 cases of Online Gender-Based Violence (KBGO) in Indonesia. This figure shows that on average, more than 6 cases of KBGO occur every day, along with the rise of recommendation algorithms that highlight content that triggers extreme emotional reactions.

In this situation, advocacy for platform accountability is key. SAFEnet and civil society networks continue to demand that the Indonesian government adopt the principles of algorithmic accountability and victim-centered content moderation.

Read More: The Orgasm Gap: Why Do Women Climax Less Than Men? Calling Out Unrealistic Images of Women’s Pleasure

Thus, solutions to misinformation and digital violence are not only a matter of technology, but also a matter of ethics, justice, and politics. Technology must stop dehumanizing, the law must side with victims, and the state must stop commodifying citizens’ privacy as a tool of power. Because in a world increasingly governed by algorithms, protecting women’s digital rights means protecting humanity itself.

Editor’s Note: Konde.co has attempted to contact Komdigi and various platforms (Google, Meta, and TikTok) via email and messaging apps. As of the publication of this article, no response or feedback has been received.

(This coverage is part of the Special Edition series #StopMisoginiTeknologi, a collaboration between Konde.co and Kabar Makassar supported by BBC Media Action)

Luthfi Maulana Adhari

Research and Development Manager at Konde.co

Creative Commons License

1. You may republish this article, but you may not edit it. Credit the author, state that the article originally comes from konde.co, and link the words “konde.co” to the URL of the original article. You are free to republish this article, online or in print, under a Creative Commons license.

2. Our articles may not be sold separately or distributed to other parties for material gain.
