
Sunday, September 1, 2019

3 ways how Artificial Intelligence may make women land in trouble

CYBER CRIME AGAINST WOMEN BY DEBARATI HALDER

Image credit: Google

Information communication technology and digital communication technology have opened up new vistas for human relationships. Innovative technology, with the help of Artificial Intelligence (AI), can now read minds,[1] predict illness,[2] predict crime occurrence,[3] enhance professional and social networks, and help in a better analytical understanding of subjects. But it can also leave devastating impacts on human life. It can alter data (including personal data), harm social reputation and can even instigate victims to take extreme steps like committing suicide.[4] All of this flows from the positive and negative uses of artificial intelligence, which forms the base of the apps that may, in turn, be put to positive and negative uses. AI has been used by web companies like Facebook for facial recognition of users, and it has also been used by companies other than web companies for processing employee data. In short, AI has been used to access private information of individuals either consensually or without consent. Here are three ways in which AI may create an uncomfortable situation for women, specifically in India:
1.    Facial recognition apps and harassment of women: Remember the time when Facebook suddenly started asking individuals for nude photos, apparently to upgrade its safety mechanisms for subscribers?[5] This project was intended to build a safety mechanism against revenge porn with the help of Artificial Intelligence. Facebook wanted to empower its subscribers, especially women, to report revenge porn. But before that, the company wanted to ensure that the revenge porn content showcased an image that belonged specifically to the victim. Facial recognition, skin texture, hair color and biometric recognition technology would match the two images (the nude picture of the victim and the revenge porn content created by the perpetrator) and identify the revenge porn content as illegal. But this project received stern objections because the possibilities of misuse of the nude photos outweighed the positive use of the same. The Facebook–Cambridge Analytica case proved that nothing is impossible when it comes to the preservation of data by body corporates: the data of individuals is always profitable, and its security is vulnerable. Yet even this may not seem as dangerous as the misuse of FaceApp. FaceApp is basically used to change the facial structure of the person whose photograph is fed to the app. It can change the texture of the skin and the density of hair, including facial hair. In July 2019, FaceApp became the center of concern for Indian cyber security stakeholders, especially when several celebrities started using FaceApp and showcasing their changed faces on Instagram. While FaceApp was basically being used for fun, it may also throw up challenges for the data safety and security of the persons concerned. FaceApp helps to change the structure of faces. But we should not forget that the altered facial image can be saved on the devices and in the cloud storage of different individuals. This altered image may be used for several illegal activities. Predators may unauthorizedly access the social media profiles of victims and change their facial images to create fake profiles; they may also use such images to create a completely new impersonating profile to harass women. Altered facial images of women may also be used for revenge purposes, especially when the victim is looking for opportunities in the entertainment or advertisement sector, where her appearance may be considered her biggest asset. Apart from this, FaceApp may be used to attract bullies and trolls to intensify the victimization of women.
2.    Bringing back the memory: No one but the web companies clearly remembers what we posted last summer. Every day, social media companies show the user what he/she posted a year or a couple of years back and gently remind the user that he/she can share the said post as a memory. How does it happen? The companies' algorithms look at the likes and comments a post earned on a daily or even hourly basis; when a post earned more likes and comments, the AI decides to bring it forth (a simple illustrative sketch of this kind of selection appears after this list). In certain situations, such refreshing of memories might not be ‘wanted’ at all, especially when the victim may have had a bitter ending of the relationship with the persons in the said image, or the text in question may no longer evoke good memories but rather traumatize the victim more. But machine intelligence does not fail the company: it is a matter of consent and choice, after all. Now consider if the account is unauthorizedly accessed: the hacker may get to know something from the past which the victim may never have wanted the hacker to know.
3.    Reminding the user about the best low prices: AI runs over the internet like blood vessels carrying oxygen all over the body. When a user decides to compare prices of any product or service, AI surfaces the same comparison on almost any platform the user visits afterwards. It may be extremely embarrassing for a woman if such searches start showing up while she is browsing social media or even a search engine with a friend or another individual. Nothing is left out by the AI: from the prices of lipsticks, cheaper hotels and flight details to the last watched videos on how to conceive. This may also expose women to discrimination, office bullying and harassment for several reasons.
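For readers curious about the mechanics behind point 2, the following is a minimal, purely illustrative sketch of how a "memories" feature might pick a post to resurface, assuming a naive engagement score (likes plus comments) over posts made on this date in earlier years. The function and field names are hypothetical; real platforms use far more complex, undisclosed ranking signals.

```python
from datetime import date
from typing import Optional

def pick_memory(posts: list, today: date) -> Optional[dict]:
    """Illustrative only: choose the post from this day in an earlier year
    with the highest engagement (likes + comments)."""
    candidates = [
        p for p in posts
        if p["posted_on"].month == today.month
        and p["posted_on"].day == today.day
        and p["posted_on"].year < today.year
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda p: p["likes"] + p["comments"])

# A post from two years back with the most engagement gets resurfaced,
# whether or not the user still wants to be reminded of it.
posts = [
    {"posted_on": date(2017, 9, 1), "likes": 120, "comments": 30, "text": "Holiday photos"},
    {"posted_on": date(2018, 9, 1), "likes": 4, "comments": 1, "text": "Random update"},
]
print(pick_memory(posts, date(2019, 9, 1)))
```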
These are but some of the many ways in which AI may make women land in trouble. AI is necessarily connected with the data privacy protection policies of web companies. The EU General Data Protection Regulation, 2018 provides that personal data may not be processed without the consent of the owner of the data.[6] But here there can be legal tangles, as web companies may claim that they do not breach data confidentiality, do not transfer the data to any other jurisdiction, and do not process the data without proper authorization. Multiple stakeholders may be involved: the original owner of the content or picture which may have been processed for the purpose of harassment; the perpetrator who may have carried out changes on the data using AI-supported apps; and perpetrators who may have unauthorizedly stored the altered contents, pictures or information, or may have used the altered information or picture for creating an impersonating profile, etc.

As per the Indian legal understanding, altering or modifying contents, information or images without proper authorization of the original owner may attract penal provisions under the Information Technology Act, 2000 (amended in 2008): these provisions may include Ss. 43 (penalty and compensation for damage to computer, computer system, etc.), 66 (computer related offences), 66C (punishment for identity theft) and 66D (punishment for cheating by personation by using computer resource, etc.). This may attract penal provisions for copyright violation as well. Further, the web companies may be narrowly liable for protecting data properly under several provisions, including S. 43A, which speaks about the liability of body corporates to protect data. But irrespective of the existing provisions, web companies may always escape the clutches of the law owing to the due diligence clause and on the question of consent expressly or impliedly provided by the woman victim concerned. In the EU, courts are becoming more and more concerned about policy violations by web companies that fool the users. In India too, the courts must throw light on the web companies' responsibility as data repositories. Regulations like the Data Protection Bill, 2018 must be considered with utmost care. These may hold the key to solving the problem of online victimization of women.
Also, women users need to be extremely cautious about machine intelligence. Awareness must be spread about how the hidden ‘safety valves’ of the web companies (which may actually make the web companies more powerful against claims of lack of due diligence) may be used properly.  
Please do not violate the copyright of this blog. If you need to use this blog for your write-up/assignment/project, then please cite it as: Halder, Debarati (2019). 3 ways how Artificial Intelligence may make women land in trouble. Published in http://debaraticyberspace.blogspot.com







[1] For example, see Nosta John (2019) A.I. Can Now Read Your Thoughts—And Turn Them Into Words and Images. Published @ https://fortune.com/2019/05/07/artificial-intelligence-mind-reading-technology/ on May 7, 2019
[2] For example, see PTI (2019). These AI tools can predict early death risk due to chronic diseases. Published @ //economictimes.indiatimes.com/articleshow/68611835.cms?from=mdr&utm_source=contentofinterest&utm_medium=text&utm_campaign=cppst on March 28, 2019
[4] Halder, D., & Jaishankar, K. (2016). Cyber crimes against women in India. New Delhi: SAGE Publications. ISBN: 9789385985775
[5] See for example Solon Olivia (2017) Facebook asks users for nude photos in project to combat 'revenge porn'. Published in https://www.theguardian.com/technology/2017/nov/07/facebook-revenge-porn-nude-photos  on November 7, 2017
[6] For more, see S. 7 of the EU GDPR. URL: https://gdpr-info.eu/art-7-gdpr/ Accessed on 17-08-2019


Monday, April 29, 2019

WhatsApp reporting of women and child abuse videos: The common understanding versus the reality

CYBER CRIME AGAINST WOMEN BY DEBARATI HALDER
Image credit : Google

A couple of days ago, a friend shared an alarming piece of news with me on Facebook about WhatsApp. It says that several cyber security think tanks, including Cyber Peace Foundation, are now finding out how WhatsApp groups are circulating child sexual abuse videos and how these contents are going viral.[1] This is not an uncommon incident now. In 2015, at the Centre for Cyber Victim Counselling, we had done an empirical research study titled “Harassment via WhatsApp in Urban and Rural India: A Baseline Survey Report (2015)”.[2] This research was conducted in three cities, namely Tirunelveli, Kolkata and Delhi, with respondents from the age group of 19-40. Even though this research did not include a survey on WhatsApp groups, it did emphasize personal harassment and the receiving of sexually explicit images, harassing videos of others, etc. Some of the findings of this report are as follows:
Ø 32.8% stated that they are aware of the safety tools in WhatsApp, and 42.7% said they feel it is safer than other internet communication services. 41.2% stated that they were not aware of the safety tools, and 13.7% stated that they do not feel that WhatsApp is safer than other internet communication services. 1.5% did not want to disclose their awareness of the safety tools in WhatsApp, and 11.5% did not want to say whether they feel WhatsApp is safer than other internet communication services. 24.4% stated that they have heard about the safety tools in WhatsApp but have no direct knowledge of them. 32.1% stated that they have heard about other internet communication services but do not have direct knowledge of whether WhatsApp is safer, because they do not use other services.
Ø In answer to the question of whether they had received any sexually explicit or obscene images, including videos/images of rape or sexual abuse of women, children, men or LGBT people, among the 131 respondents, 11.5% stated that they had received sexually explicit or obscene images, 51.9% stated they did not receive such images and 2.3% did not want to answer. 34.4% stated that they are not aware of being targeted with such images because they do not use WhatsApp or have stopped using the services.[3]

This suggests that WhatsApp has long been a “chosen platform” for predators.
But why has WhatsApp become dearer to predators than other social media platforms like Facebook or Instagram, especially to those, including pedophiles, who create and circulate abusive videos, including sexual abuse videos of women? Let's have a reality check on WhatsApp here:
Ø What is WhatsApp and how it works: As we had mentioned in the research report, WhatsApp Messenger was started in approximately 2009 in the US by Jan Koum and Brian Acton as a “better SMS alternative” (WhatsApp, 2014), and it is available for iPhone, BlackBerry, Android, Windows Phone, Nokia, etc. The app uses the user's phone number as the basic verification mode and, at the time of that study, did not support calls via VoIP (Schrittwieser, Frühwirt, Kieseberg, Leithner, Mulazzani, Huber, & Weippl, 2014). Some of the basic features of WhatsApp include status updates, profile picture updates, uploading of the address book (Schrittwieser, et al., 2014), options to create/join groups (Terpstra, 2013), updates about location, and uploading and circulating photos, videos and voice recordings. Typically, WhatsApp verification involves a three-stage procedure: (i) logging on to the download page of WhatsApp @ https://www.whatsapp.com/download/, clicking on the chosen device icon and starting the download; (ii) the server then sends a 4-digit PIN by SMS to the prospective user's phone for verification and authentication (Schrittwieser, et al., 2014); (iii) the user copies the code into WhatsApp's graphical user interface (GUI), and after cross-checking by the WhatsApp server the app gets activated on the user's phone (Schrittwieser, et al., 2014). Once connected to WhatsApp, the user can get information about other WhatsApp users by simply checking his/her phone address book, call log history or Gmail address book. This is because WhatsApp may access the user's contact list or address book to keep track of other mobile phone numbers that use the WhatsApp services and may store this information on the server (WhatsApp, 2014, see sub-para B in Para 3) to get people connected instantly; one WhatsApp user may thus get instantly connected to others, including their profile pictures, through the server.[4]
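To make the three-stage activation described above more concrete, here is a minimal sketch of a generic SMS-PIN verification flow of that kind. The class and method names are hypothetical; this is not WhatsApp's actual implementation, it only mirrors the steps reported by Schrittwieser et al. (2014).

```python
import secrets

class PinVerificationServer:
    """Illustrative server side of an SMS-PIN activation flow: issue a short
    code for a phone number, then activate the account only if the code typed
    back into the app matches."""

    def __init__(self):
        self._pending = {}        # phone number -> issued PIN
        self._activated = set()   # phone numbers with an active account

    def request_activation(self, phone_number: str) -> str:
        pin = f"{secrets.randbelow(10_000):04d}"   # 4-digit code, as in the description above
        self._pending[phone_number] = pin
        # In a real deployment the PIN would go out via an SMS gateway,
        # not be returned to the caller.
        return pin

    def confirm(self, phone_number: str, typed_pin: str) -> bool:
        if self._pending.get(phone_number) == typed_pin:
            self._activated.add(phone_number)
            del self._pending[phone_number]
            return True
        return False

# Usage: the "SMS" delivers the PIN, the user types it into the app's GUI,
# and the account is activated for that number.
server = PinVerificationServer()
sms_pin = server.request_activation("+91XXXXXXXXXX")
print(server.confirm("+91XXXXXXXXXX", sms_pin))   # True
```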
Ø How do users create networks on WhatsApp, and how may the groups be busted?
After downloading the app and activating it, the user may get connected to his/her friends or like-minded people by doing a simple search in the phone address book; other numbers using the WhatsApp application will show up. Users may choose to circulate their messages in several ways through WhatsApp:
ü By using the broadcasting feature, whereby a single text/audio-visual message may be conveyed to a batch of people. The broadcast list may be created as below:
Image source : WhatsApp
ü By forwarding the message to a maximum of five recipients at one time. Now, this “forwarding” may reach a wider recipient list if it is done in a group. A WhatsApp group can be created by any individual by going to the chat tab and creating a new group.
Image source: WhatsApp
Interestingly, WhatsApp groups can be private or public as well. Most of the groups that circulate images/contents of sexual abuse, whether for self-gratification or group gratification, may keep their group private so that the group is not disturbed by any third-party monitoring authority, including the police. These group members generally have a mutual understanding and trust that the contents shared by them will not be reported outside. The members may also download/save the sexual abuse/harassment videos/contents on their own devices, for individual gratification or for unethical gain through further circulation. The end-to-end encryption offered by WhatsApp may make it more favorable for such group members to widely discuss and circulate such contents.
Public groups, on the other hand, are more open groups which people may join for discussions, and they are not necessarily private to those whom the admin/s have invited or made join. Unlike the private groups, public groups may be monitored if any third-party monitoring authority joins the discussion in disguise, or if any group member decides to bring in the police or other monitoring stakeholders. In both these cases, the admin's responsibilities have been scrutinized by courts in India. A recent report suggests that courts have held admins responsible for allowing seditious, inciting messages to spread.[5] WhatsApp group members and admins have also been booked for creating/circulating child sexual abuse materials for sexual gratification.[6]
Ø What if the group admin is an underage user?
It is important to know the age barrier for WhatsApp users. There are in fact not two, but three options given by WhatsApp. Let's check them:
1.     The minimum age criterion for the European region, including European Union countries, is 16.
2.     For other countries, the minimum age criterion is 13, unless the domestic laws of the said countries have fixed a higher age for using WhatsApp.[7]
3.     Overriding both, a child can use the WhatsApp services of a parent if the parent allows the child to use the services under his/her monitoring.
This in fact shows that a child may use WhatsApp, may create his/her own profile and may create contents himself/herself for private or public sharing on WhatsApp with whomever he/she wants (a rough sketch of this tiered age policy follows below).
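The sketch below is hypothetical and only restates the three options listed above; the region set and parameter names are made up for illustration and do not reflect WhatsApp's actual code or region list.

```python
EUROPEAN_REGION = {"DE", "FR", "IT", "ES", "NL", "IE"}   # abbreviated, illustrative only

def minimum_age(region: str, local_legal_minimum: int = 13) -> int:
    """16 in the European region; elsewhere 13, unless domestic law fixes a higher age."""
    if region in EUROPEAN_REGION:
        return 16
    return max(13, local_legal_minimum)

def may_use_whatsapp(age: int, region: str, local_legal_minimum: int = 13,
                     on_parent_account_with_monitoring: bool = False) -> bool:
    # The third option above: a child using a parent's account under the
    # parent's monitoring effectively bypasses the age threshold.
    if on_parent_account_with_monitoring:
        return True
    return age >= minimum_age(region, local_legal_minimum)

print(may_use_whatsapp(14, "IN"))                                          # True: 13+ outside the European region
print(may_use_whatsapp(14, "DE"))                                          # False: 16 applies
print(may_use_whatsapp(10, "IN", on_parent_account_with_monitoring=True))  # True: parent's account
```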
Ø What happens to the producer/distributor of the offensive contents?
In the broader understanding, the child is legally permitted to create content which he/she thinks can be circulated. Now, this has been a question for several courts: when a child is creating a sexting content and circulating the same with fellow children (including his/her boyfriend/girlfriend), how would the courts (and the laws) treat him/her? Is he the perpetrator? Is he the victim? Or is he a ‘child’ with no liabilities?[8] S. 67B of the Information Technology Act, 2000 (amended in 2008) and Ss. 13 and 14 of the Protection of Children from Sexual Offences Act, 2012 clearly mention that ‘whoever’ creates, circulates, produces, etc. contents depicting children in sexually explicit acts may be penalized. These can be considered non-bailable offences, which would suggest that the punishment can be heavier. Similarly, Ss. 67 and 67A of the Information Technology Act, 2000 (amended in 2008) also penalize ‘anyone’ who creates, distributes, etc. sexually explicit and obscene materials. S. 354C of the Indian Penal Code also touches upon penalizing men who capture or circulate private images of a woman who would not consent to sharing such contents with third parties. Ss. 375 and 376 of the Indian Penal Code also touch upon capturing rape videos and storing or circulating the same. These offences can also be non-bailable and can carry heavier punishments.
The content that the children would have created also carries significance: if a child creates a sexting video, a sexual abuse video, a non-consensual porn image/content or even a revenge porn content and sends it to his friend/s, the recipient may decide not to receive the content if, from the look of the content or the text attached to it, the recipient feels that it should not be opened or further circulated because it contains ‘bad stuff’. WhatsApp has been smart enough to create a limited policy guideline and security feature whereby a parent can report his/her child who may be using WhatsApp without parental guidance, where the parent feels that the child may be engaging in, or being victimized by, illegal/risky contents and connections. It says:
“If your underage child created a WhatsApp account, you can show them how to delete their account. You can learn how to delete an account in our Help Center. If you'd like to report an account belonging to someone underage, please send us an email. In your email, please provide the following documentation and redact or hide any unrelated personal information:
Proof of ownership of the WhatsApp number (e.g., copy of government-issued identification card and phone bill with the same name)
Proof of parental authority (e.g., copy of birth or adoption certificate for the underage child)
Proof of child's date of birth (e.g., copy of birth or adoption certificate for the underage child)
We'll promptly disable the WhatsApp account if it's reasonably verifiable that the account belongs to your underage child. You won't receive confirmation of this action. Our ability to review and take appropriate action on a report significantly improves with the completeness of the information requested above.”[9]
Removal/deactivation of the said account is, however, at the discretion of WhatsApp, especially when it is not reasonably convinced.
But in case the reporting individual is not the parent of the child who may be doing illegal things or who may be a potential victim, WhatsApp suggests contacting the parents of the child.
For adult wrongdoers, WhatsApp has a typical formula which is followed by almost all social media companies: it suggests blocking the number so that the user of that particular number is not able to contact the blocker unless the former is unblocked. Here is what WhatsApp suggests regarding how to block a number:
Image source: WhatsApp
Ø The producer/distributor of the offensive content has been arrested. What about the offensive image?
The above information would not serve much purpose for blocking/reporting the content unless the same is considered an offending subject through a police report. In such a case, the said content may be disabled on WhatsApp's own server; but WhatsApp otherwise works rather like email or SMS and would not access individual devices to dig out the offensive content and block or disable it. Hence, even if the persons (owning the WhatsApp numbers and profiles) are blocked, the contents may keep on circulating unless these have been ‘ordered’ to be disabled from the server. This is how objectionable contents float from one device to another and reach millions, even after the original sender may have deleted them from his device to save himself, or may have been arrested by the police.
Nothing but a police report or a court order about the said content, therefore, could be the best answer for blocking the content from being further circulated. But a few things cannot be ignored when this is suggested: the police must act accordingly to make WhatsApp delete the content from its server and block its circulation whenever it appears on WhatsApp, from whichever device. This may become a herculean task, especially when the police and the courts feel challenged by the lack of infrastructure and proper laws. As long as this does not take place, WhatsApp users have to be responsible enough not to circulate such contents, even if they receive them from known or unknown numbers. Not to be forgotten, the police may also arrest individuals who store child sexual abuse videos/images unknowingly. Unfortunately, the same may not hold for adult sexual abuse cases. But if users use WhatsApp responsibly, the problem may definitely be addressed.

Please note : Do not violate copyright of this blog. If you would like to use information provided in this blog for your own assignment/writeup/project/blog/article, please cite it as “Halder D. (2019), " WhatsApp reporting of women and child abuse videos:  The common understanding vs the reality”  29th April, 2019 , published in http://debaraticyberspace.blogspot.com




[1] Cuthbertson Anthony (2019). WhatsApp is hotbed for child sex abuse videos in India, study finds. Published in https://www.independent.co.uk/life-style/gadgets-and-tech/news/whatsapp-child-sex-abuse-videos-groups-india-a8885811.html?fbclid=IwAR251ajPe20Y7zcXtD2o1s0w--86-Pr5UrKHVgv7IF_7swAH_dvEGQTzcZQ on 26th April, 2019. Retrieved on 26th April, 2019
[2] Halder, D., & Jaishankar, K. (2015). Harassment via WhatsApp in Urban and Rural India: A Baseline Survey Report (2015). Tirunelveli, India: Centre for Cyber Victim Counselling. Available @ https://www.cybervictims.org/CCVCresearchreport2015.pdf Retrieved on 27.04.2019
[3] Ibid
[4] See pp 2 in ibid
[5] See WhatsApp ‘admin’ spends five months in an Indian jail. Published in https://www.bbc.com/news/technology-44925166 Accessed on 22.04.2019
[6] See Sandhya Nair (2018) WhatsApp group sharing child porn busted, 5 held
[7] For more information see https://faq.whatsapp.com/en/general/26000151/?category=5245250
[8] Halder, D., & Jaishankar, K. (2013). Revenge Porn by Teens in the United States and India: A Socio-legal Analysis. International Annals of Criminology, 51(1-2), 85-111. ISSN: 00034452 (UGC Listed Journal)

[9] See https://faq.whatsapp.com/en/general/26000151/?category=5245250

Tuesday, April 23, 2019

The TikTok ban: Why the ban may fail to prevent online victimization of women

CYBER CRIME AGAINST WOMEN BY DEBARATI HALDER

Image credit: Google 

On 24th April, the Madras High Court will decide on the plea of ByteDance, which owns TikTok, regarding the much talked about ban of the app. TikTok, a non-gaming app, has given tough competition, in terms of popularity, to all the social media giants because of its unique features, which allow users to create and share short videos with special effects. Teenagers and adults in India loved the app because, unlike other social media platforms including YouTube, TikTok has simple features for uploading and publishing videos. Unlike PUBG, however, it does not have gaming features.
In early April 2019, the Madurai bench of the Madras High Court, in an interim order, directed the government stakeholders in the State and the Centre to ban the video app TikTok, as the public interest litigation in this regard emphasized that it encourages pornography and that underage users are vulnerable to being exposed to sexually explicit contents, pornography, etc., which may not be good for their mental and physical health.[1] Incidentally, the Madurai bench of the Madras High Court was the first court in India to take suo motu cognizance in the Blue Whale game case and to ask the Central government and the social media websites and web companies like Google to monitor what is being generated and catered to users through their platforms.[2] But in this case, the situation stands on a different footing: consequent to the interim order, Google and Apple removed the TikTok app from their app stores. As a result, ByteDance incurred huge losses. The latter has now challenged this interim order on the ground that it was passed on the basis of an ex parte hearing. The company has stated that the app allows users to create videos and circulate them for fun and amusement and does not pose any threat to the security of individuals. ByteDance also stated that such bans are against the right to speech and expression.[3]
We can see here two important points:
First: before the governments took prohibitory action (like what happened with the PUBG ban in Gujarat, where police started arresting those who downloaded and played PUBG even after the ban order was conveyed to the public)[4], the web company Google and the phone and software manufacturer Apple had followed the mandates of S. 79 (exemption of liability of intermediary in certain cases) and Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011: specially mentionable are Rules 3(3) and 3(4), which state as follows:
Rule 3(3) states that the intermediary shall not knowingly host or publish any information or shall not initiate the transmission, select the receiver of transmission, and select or modify the information contained in the transmission as specified in sub-rule (2): provided that the following actions by an intermediary shall not amount to hosting, publishing, editing or storing of any such information as specified in sub-rule (2) — (a) temporary or transient or intermediate storage of information automatically within the computer resource as an intrinsic feature of such computer resource, involving no exercise of any human editorial control, for onward transmission or communication to another computer resource; (b) removal of access to any information, data or communication link by an intermediary after such information, data or communication link comes to the actual knowledge of a person authorised by the intermediary pursuant to any order or direction as per the provisions of the Act;
And Rule 3(4) of the above Rules states that the intermediary, on whose computer system the information is stored or hosted or published, upon obtaining knowledge by itself or being brought to actual knowledge by an affected person in writing or through email signed with electronic signature about any such information as mentioned in sub-rule (2) above, shall act within thirty-six hours and, where applicable, work with the user or owner of such information to disable such information that is in contravention of sub-rule (2). Further, the intermediary shall preserve such information and associated records for at least ninety days for investigation purposes.
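Purely as a reading aid (not legal advice or anyone's actual compliance tooling), the two time limits in Rule 3(4) can be pictured as follows, assuming a hypothetical timestamp for when the intermediary obtains actual knowledge.

```python
from datetime import datetime, timedelta

ACTION_WINDOW = timedelta(hours=36)        # act within thirty-six hours of actual knowledge
PRESERVATION_PERIOD = timedelta(days=90)   # preserve information/records for at least ninety days

def rule_3_4_deadlines(knowledge_at: datetime) -> dict:
    """Given the moment of actual knowledge, return the disable-by deadline
    and the minimum record-preservation date under Rule 3(4)."""
    return {
        "disable_by": knowledge_at + ACTION_WINDOW,
        "preserve_records_until": knowledge_at + PRESERVATION_PERIOD,
    }

# Hypothetical complaint received on 23 April 2019 at 10:00.
print(rule_3_4_deadlines(datetime(2019, 4, 23, 10, 0)))
```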
These companies apparently did not want to invite any more trouble like in the past, when they were repeatedly called by the courts to explain why they had not taken any action to block and ban contents and materials victimizing children which are regularly shared through their platforms.
Second: ByteDance, the parent company of TikTok, has alleged that it was not heard by the court before the ban order was pronounced. Apparently, it may become the first web company to press the question of why it should be banned at all when it has its own flagging system and does take care of the contents that are flagged. This case would make history in India, where the court has taken a decision influenced by the happenings of the past and the concerned web company promises to break the glass ceiling, because it knows this is not the end. While much information on how to use (activate/download) TikTok without the Google/Apple app stores has started surfacing on the internet,[5] my concern is not how the app may or may not be downloaded legally or illegally.
Exposing children to pornography, using women as items of sexual gratification, grooming, creating “dangerous contents” which may cause damage to public health, online victimization of women and children, etc. would not stop if one video creating and sharing app is banned. In that case, the courts must also consider picking up the social media giants Facebook, Twitter, YouTube, Instagram, etc., and search engines like Google, for banning because of their constant failure to monitor misogynist, sexist and child-abusive contents. All social media companies, including YouTube, have data-mined several images and contents and marked them as adult-specific; several videos are not available unless the users verify their age. But how will you find the needle in the haystack? The courts have not yet been able to make strict regulations for virtual age verification by the web companies. The web companies (hosted in the US and other countries) are confused about the law relating to pornography because India still does not have any focused law defining pornography. Further, the web companies also do not accept all contents (which are alleged to be porn as per the Indian understanding) as offensive, because the ever expanding free speech and expression jurisprudence of the US does not allow the web companies to take down contents unless they gravely threaten the physical and virtual privacy and security of the person concerned or damage the reputation of the woman (in case the victim is a woman). Children can still be exposed to online dangers through Facebook, Instagram or YouTube. Women continue to be victimized through all pockets of the internet.
As such, there may be practically no solution in this, and a ban would encourage more law-breaking. Google and Apple have already shown that they are willing to follow the local laws (or rather, not to fall into any legal tangles regarding web service providers' liability). It is expected that India will create focused laws to address the different emerging and existing types of online victimization and that the same will be implemented in a proper way. Otherwise, the banning orders may lead back to ground zero.

Please note : Do not violate copyright of this blog. If you would like to use information provided in this blog for your own assignment/writeup/project/blog/article, please cite it as “Halder D. (2019), "The TikTok ban : Why it may fail to prevent online victimization of women”  23rd April, 2019 , published in http://debaraticyberspace.blogspot.com



[1] For more, see J.Sam Daniel (2019). Ban TikTok, Its encouraging pornography : Madras High court to Centre. Published in NDTV on April 4, 2019. URL https://www.ndtv.com/india-news/madras-high-court-directs-centre-to-prohibit-downloading-of-tik-tok-app-2017482 Accessed on 12.04.2019
[2] Halder, D. (2018). The #Bluewhale challenge to the Indian judiciary: A critical analysis of the response of the Indian higher judiciary to risky online contents with special reference to the Bluewhale suicide game. In Sourdin Tania & Zariski Archie (eds.), The responsive judges. USA: Springer. ISBN: 978-981-13-1022-5, pp. 259-276.
[3] See Live Law News Network (2019). TikTok Ban: SC Says Ban Will Stand Lifted If Madras HC Fails To Decide On Interim Order By April 24. Available @ https://www.livelaw.in/top-stories/tiktok-ban-sc-says-ban-will-stand-lifted-if-madras-hc-fails-to-decide-on-interim-order-by-april-24-144438. Published on 22nd April, 2019. Accessed on 23rd April, 2019
[4] See Ahaskar Abhijit (2019). Why playing PUBG Mobile can get you arrested in Gujarat. Published in https://www.livemint.com/news/india/why-playing-pubg-mobile-can-get-you-arrested-in-gujarat-1552849965539.html on 18th March, 2019. Accessed on 12.04.2019
[5] For example, see SC hearing on TikTok: Why it is difficult to ban the app in India. Published in https://www.businesstoday.in/technology/internet/tiktok-ban-after-madras-hc-decision-reality-banned-apps-tiktok-pubg/story/339286.html   on April 22, 2019. Accessed on 22.04.2019