REVIEW OF INTELLECTUAL PROPERTY, TECHNOLOGY, MEDIA, TELECOM, PRIVACY AND COMPETITION LAWS

- News Corner: Brazil’s Push-back against Apple’s “iPhone” Trademark
The word “iPhone” has come to be predominantly associated with the smartphone giant Apple Inc., which is why it is interesting to note that the Supreme Court of Brazil has accepted an appeal filed by IGB Eletrônica to assess whether Apple can claim exclusive trademark rights over “iPhone”, a mark which IGB contends is generic and descriptive for a smartphone. IGB, which carries on its business under the name Gradiente, also contends that it has prior rights over the mark “iPhone” by way of its trademark application for ‘G GRADIENTE IPHONE’ under No. 822112175, filed on March 29, 2000 and registered on January 2, 2008. IGB has given Apple a run for its money when it comes to trademark registrations in Brazil. As reported in 2013, Apple faced several problems, including refusal of its trademark registration, due to IGB’s prior trademark. Apple challenged this before Brazil’s National Institute of Industrial Property, contending that IGB had not sold any products under the “iPhone” mark whereas Apple has global goodwill in it. In that round, IGB lost exclusive rights over the “iPhone” mark, but it has now brought an appeal before the Supreme Court. Time will tell which matters more: prior rights or goodwill.
- Teaming ‘Teams’ with Microsoft Office: Slack Challenges Microsoft’s Anticompetitive Tying Practices
Posted on August 1, 2020. Authored by Tanya Garg.

Introduction

On 22 July 2020, the business-messaging app Slack Technologies Inc. filed an antitrust complaint (“Complaint”) against Microsoft Corporation with the European Commission (“Commission”), the European Union’s (“EU”) apex competition watchdog. “Slack” is a proprietary business communication platform developed by Slack Technologies. Under the European rules, the Complaint is not made public. However, in a press release, Jonathan Prince, Vice President of Communications and Policy at Slack, said, “We’re confident that we win on the merits of our product, but we can’t ignore illegal behaviour that deprives customers of access to the tools and solutions they want. Slack threatens Microsoft’s hold on business email, the cornerstone of Office, which means Slack threatens Microsoft’s lock on enterprise software.” Slack’s representatives said it has prayed for an order to remove Microsoft’s application “Teams” from its “MS Office” bundle, making it a stand-alone product that competes on its merits. While the specific grounds for the Complaint were not made public, the press release clarified: “The complaint details Microsoft’s illegal and anti-competitive practice of abusing its market dominance to extinguish competition in breach of European Union competition law. Microsoft has illegally tied its Teams product into its market-dominant Office productivity suite, force installing it for millions, blocking its removal, and hiding the true cost to enterprise customers”.

The Complaint threatens Microsoft’s recent ability to avoid regulatory scrutiny and steer clear of any investigations by the Commission. While Google, Facebook, Amazon and Apple have faced the brunt of investigations, Microsoft has dodged antitrust scrutiny so far. The Complaint could not have come at a more ironic time for Microsoft: it was reported earlier this week that Microsoft’s President Brad Smith had recently met with the US House antitrust sub-committee to discuss topics such as Apple’s arbitrary App Store approval processes.

Allegations

Microsoft’s size, revenue and stock value indisputably make it part of the crème de la crème of the Big Tech industry. Slack alleged that the software giant has abused its dominance by using its market power to try to crush a newcomer rival. Microsoft is accused of trying to eliminate competition by tying its rival app “Teams” with its extensively used Office 365 tools, a move which meant millions of users were forced to install the app without the ability to remove it. Slack runs a service that competes directly with “Teams”, and contended that this bundling tactic is part of a pattern of anticompetitive behaviour displayed by Microsoft. The allegations of abuse of dominance and of illegal tying and bundling recall Microsoft’s competition battles[1] from over a decade ago. After reviewing the Complaint, it is for the Commission to decide whether a formal investigation into Microsoft’s alleged anti-competitive practices is warranted. As seen in the last few years, the European regulator has been aggressively pursuing antitrust actions against large tech companies. For months, Slack argued that it is more akin to the software application “Zoom” and is not in competition with Microsoft’s “Teams”. However, it contradicted that position in its SEC 10-Q filing last year, stating, “Our primary competitor is currently Microsoft Corporation”.
Slack thus finally admitted what was clear all along: Microsoft Teams is a competitor, and Slack is finding it hard to keep up.

Tying and Bundling

Article 102 of the Treaty on the Functioning of the European Union (“TFEU”) prohibits abusive conduct by companies that have a dominant position in a particular market or industry. The Commission has clarified that being in a dominant position is not illegal; abusing that dominance to distort competition is. Article 102(d) of the TFEU deals with abuse of a dominant position by tying and bundling: making the conclusion of a contract subject to the acceptance by the other party of supplementary obligations which have no connection with the subject of the contract constitutes abuse by means of tying and bundling. Tying occurs when a seller conditions the sale of one product on the buyer also purchasing another product, known as the tied product, from that seller alone. In 2019, the Competition Commission of India (“CCI”) held that the signing of the Mobile Application Distribution Agreement, which mandated pre-installation of the entire suite of Google apps[2], compulsorily tied those apps to the other applications. The CCI noted that by doing so, Google disincentivised device manufacturers from developing viable alternatives, thereby restricting technical development to the prejudice of consumers, in violation of Section 4(2) of the Competition Act, 2002. Bundling is the term used to describe selling a collection of goods as a set. In pure bundling, individual goods are sold only in the agreed combination, making it essentially equivalent to tying.[3] In mixed bundling, goods are available both individually and as a package. Certain conditions must be analysed and fulfilled to establish whether tying and bundling has caused exclusionary effects:
- there is pre-existing dominance;
- the products or services that are tied or bundled are distinct;
- the practice is likely to have an anticompetitive or foreclosure effect, causing demand to shift away from competitors; and
- there is no efficiency or objective justification for the tying or bundling in question.

Microsoft’s Earlier Anticompetitive Practices

The history behind United States of America v. Microsoft[4] begins in 1992, when the Federal Trade Commission opened an inquiry into whether Microsoft was abusing its dominance in the PC operating systems market. In 1994, Microsoft agreed not to tie other Microsoft products to the sale of Windows, though it remained free to integrate additional features into the operating system. The trial commenced on May 18, 1998, with the US Department of Justice (“DoJ”) and the Attorneys General of twenty US states suing Microsoft for illegally obstructing competition in order to protect its monopoly in the software industry.[5] Additionally, the DoJ sued Microsoft for violating the 1994 decree by compelling computer manufacturers to incorporate its Internet browser as part of the installation of Windows. The Court found that Microsoft’s dominance in the computer operating systems market constituted a monopoly, and that active steps had been taken to crush threats to that monopoly such as Apple, Java and others. It concluded that Microsoft had committed monopolization, attempted monopolization, and tying in violation of Sections 1 and 2 of the Sherman Antitrust Act. In 2001, the DoJ reached an agreement with Microsoft to settle the case. The settlement mandated Microsoft to share its application programming interfaces with third-party companies.
Additionally, a three-person panel was to be set up to ensure compliance with the terms of the settlement. This panel would have full access to Microsoft’s systems, records, and source code for five years. The settlement was unanimously upheld and approved by the U.S. Court of Appeals.

The dispute in Microsoft v. Commission of the European Communities[6] began in 1998, when Sun Microsystems voiced concern about Microsoft’s failure to disclose certain Windows NT interfaces. The case grew when the Commission examined Microsoft’s alleged abuse of its dominant position in the market. The complaint resulted in the EU ordering Microsoft to disclose information concerning its server products and to release a version of Microsoft Windows without Windows Media Player (“WMP”). The Court, while assessing the allegations under Article 102 of the TFEU, observed that the products in question were separate and distinct and would have independent demand if not tied together. It also observed that the tying practice left consumers with no choice to use the Microsoft Windows operating system without WMP. Moreover, the Court found that Microsoft had no objective justification for tying these products (such as proof that WMP contributes to the efficiency of Microsoft’s operating system or any other reason justifying the tie), and that the tie led to foreclosure of competition for several other independent media players. The Court further remarked that Microsoft held a significant market share and a dominant position in the market for operating systems; thus, by making it mandatory to acquire WMP along with its operating system, Microsoft was abusing its dominant position. The Court specifically stated, “Microsoft’s competitors are a priori at a disadvantage even if their products are inherently better than Windows Media Player”. Citing the continuing abuse by Microsoft, it was ordered that Microsoft: (a) offer a version of Windows without Windows Media Player; and (b) divulge the information necessary for competing networking software to interact fully with Windows desktops and servers. A penalty of €497 million, the largest fine ever handed out by the Commission at the time, was imposed.

While Slack and Microsoft may soon go head to head, Google may be the only one benefitting. Google has already shown indications of catching up with platforms such as “Zoom”, Microsoft’s “Teams”, and Slack. It is worth mentioning that Google recently announced that it was more tightly integrating video, chat and email into its “GSuite” bundle of products. Google’s own video-conferencing software, “Google Meet”, was earlier made free in an attempt to compete with Zoom’s sudden popularity due to COVID-19. Since then, Google has been taking proactive steps to integrate its interactive software programmes such as “Google Chat”, “Rooms” and “Meet” with its other applications such as Gmail and Google Calendar. Thus, even if Slack’s Complaint convinces the Commission to open an investigation and take action against Microsoft’s tying practices, Slack still faces the impending threat of Google bundling its own apps and services in a similar way. However, according to Slack, Google is not a dominant company in business software and is therefore not included in the Complaint.
If Google is able to produce a formidable competitor, then Slack will face far bigger problems than Microsoft. Google evidently dominates consumer usage of email and search, along with services like YouTube. If the present complaint against Microsoft succeeds, it might open the door to potential anti-competitive issues for a software giant such as Google concerning its bundling practices.

Conclusion

Platforms such as “Slack”, “Teams” and “Zoom” have experienced a massive increase in demand due to the COVID-19 induced lockdowns that have made work from home mandatory. There will be a lot of attention around this case if the Commission decides to pursue the Complaint. Slack hopes that “Teams” will be separated from the “Office 365 Suite” bundle and sold separately with a fair commercial price tag. The author opines that, in light of the previous antitrust complaints against Microsoft for its bundling practices, Microsoft’s dominant position in the PC operating systems market has been sufficiently established. However, the present case concerns the tying of “Teams” not with the operating system but with the “MS Office” bundle. Therefore, whether Slack’s complaint has any merit, only time will tell.

[1] United States v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001); see also Case COMP/C-3/37.792.
[2] Umar Javeed & Ors. v. Google LLC & Ors., Case No. 39 of 2018; see also Eastman Kodak Co. v. Image Technical Services, Inc., 504 U.S. 451 (1992).
[3] Nicholas Economides, ‘Bundling and Tying’, The Palgrave Encyclopaedia of Strategic Management, http://neconomides.stern.nyu.edu/networks/Economides_Bundling_and_Tying.pdf.
[4] United States v. Microsoft Corp., supra note 1.
[5] Nicholas Economides, ‘The Microsoft Antitrust Case’, Journal of Industry, Competition and Trade: From Theory to Policy (August 2001), http://neconomides.stern.nyu.edu/networks/Microsoft_Antitrust.pdf.
[6] Microsoft v. Commission of the European Communities, Case T-201/04 (17 September 2007).
- Trademarking a Single Alphabet – Are Graphic Effects Distinctive Enough?
Originally posted on July 29, 2020. Authored by Tanya Varshney.

One of the fundamental rules of trademark law across jurisdictions is that a trademark must be distinctive. In India, a mark being “devoid of any distinctive character” is an absolute ground of refusal for a trademark application[1]. However, popular brands are often recognizable by marks consisting of single alphabets or symbols depicted in a stylized manner, and several such single-letter marks (the images are omitted here) have come to be widely associated with their respective brands. These brands are undeniably globally renowned, and most of us would be able to instantly associate such single-letter marks with their respective brands. Thus, even though these marks technically claim exclusive rights in a stylized depiction of an alphabet (which would inevitably be used by people), they have acquired distinctiveness because they are ‘well-known’. The proviso to Section 9(1) of the Trademarks Act, 1999 acknowledges this, providing that “a trade mark shall not be refused registration if before the date of application for registration it has acquired a distinctive character as a result of the use made of it or is a well-known trade mark”. Interestingly, a search of the WIPO Global Brand Database revealed that such brands have actually applied for and successfully secured registrations for their single-letter trademarks. For instance, Facebook’s “F” marks have been registered by Facebook Inc. in many jurisdictions. In the standard word descriptions of these device marks, ultimately only the sole alphabet is provided (for instance, with Facebook’s marks, the mark description is ‘f’). This raises some pertinent questions: (a) whether ordinary brands (that have not acquired a well-known status for their marks) would be able to secure trademark registrations if their mark is a stylized depiction of an alphabet; and (b) whether these well-known brands would be able to oppose or litigate against other marks which are stylized depictions of their respective alphabets/letters, and whether the standard for comparison would be higher and stricter. This article aims to analyze these questions with the help of judicial precedents across the world.

Acronym Trademarks

Jurisprudence on the validity and enforcement of acronym trademarks is slightly more accessible than jurisprudence on single-letter trademarks, and the reasoning provided by the courts in such cases offers useful insight into the crux of this article. In Tube Investments of India v. Trade Industries, Rajasthan[2], the Supreme Court had to decide whether there was trademark infringement between two rival device marks (the images are omitted here)[3]. The Supreme Court ascertained that the rival marks had significant visual similarities and that the nature of the goods sold under them was similar. The appellant’s mark was registered in 1972, whereas the respondent filed its application in 1984 but later withdrew it due to an opposition proceeding by the appellant. In view of this, the court found that the appellant’s mark had superior rights. In G.M. Modular v. TM Marketing[4], the court had to assess the trademark dispute between the “GM” device mark and the “TM” device mark. The Plaintiff filed a suit for passing off of the trademark and for copyright infringement. The Defendant contended that “TM” consists of ordinary letters which are insufficient to amount to copyright infringement and that the Plaintiff cannot claim exclusivity over the font.
The essential features of the “GM” device mark and the “TM” device mark were stated to be “G Magic” and “Touch Me” respectively. It was further contended that the test for comparison of marks consisting of ordinary letters of the alphabet ought to be different from the test applicable to word marks having phonetic similarity. The court observed that if the rival marks are identical, there is no need to inquire into deception or confusion, whereas if the marks are deceptively similar, it becomes important to assess the likelihood of confusion. Against this background, the court noted that there were visual similarities between the rival marks, including the colour of the background, the similarity of the letters “GM” and “TM”, and the manner and style of the rival marks. Thus, the court passed a favourable order for the Plaintiff. More recently, a trademark infringement suit was filed by the clothing brand H&M against HM Megabrands.[5] In defense, the defendant argued that “no person can claim exclusive rights to the acronym / two letter mark ‘HM’ or ‘H & M’ or that the alphabets ‘H’ and ‘M’ do not have any trade mark significance; that the mark ‘H&M’ being a two letter mark is not registrable under the Trade Marks Act, 1999 and is inherently not distinctive or capable of distinguishing the goods / services of the plaintiffs from those of others”. Interestingly, the court observed that ‘H&M’ or ‘HM’ are not generic or publici juris to the trade or business of garment manufacturing. In particular, the court stated: “a word, even if generic, if applied to a business with which the word is unrelated, is indeed to be protected. Merely because it is alphabets or acronym, is immaterial”. This begs the question: what alphabets would be related to the garment business? Perhaps ‘c’ for clothing, ‘g’ for garment or ‘f’ for fashion? This seems rather peculiar as, more often than not, single alphabets would be unrelated to any business.

Letter as a Concept: The CJEU Chimes In

In Kiosked v. European Union Intellectual Property Office[6], the question of trademark infringement with respect to two figurative ‘K’ marks was raised before the Court of Justice of the European Union (“CJEU”) (the images of the rival marks are omitted here). The Applicant had applied for an international trademark registration, which was successfully opposed by the Intervener on the basis of its earlier ‘K’ marks. The Applicant then appealed to the EUIPO. The EUIPO upheld the opposition order on the ground that there was a “lower than average” degree of visual similarity and that neither of the marks had inherent distinctiveness; thus, there would be chances of confusion and deception between the two. Ultimately, superior rights were accorded to the earlier trademark. The common elements included the figurative depiction of ‘K’ in white over a black background. The CJEU, while examining the rival marks, reiterated the principle that assessment of the likelihood of confusion must be based on the overall impression given by the marks, giving due weight to their distinctive and dominant elements. The CJEU disagreed with the EUIPO’s view that, since the disputed marks are purely figurative, the degree of comparison may be “lower than average”. The court found that the common element between the two was essentially the ‘K’ along with the colour scheme. However, there were also differences, such as the vertical and diagonal lines and the rounded edges versus sharp edges.
Additionally, the court found that the black background of the Applicant’s mark resembled a “speech bubble” while the black background in the Intervener’s mark did “not have a specific geometric form”. The court found these differences significant enough to conclude that the rival marks are not identical and that the degree of similarity “must be at least average”. Phonetically, however, the court found the marks to be identical, as both are figurative representations of ‘K’. Interestingly, on the point of conceptual comparison, the court stated: “as regards the conceptual comparison of two marks which consist of the same single letter, it must be stated that the graphic representation of a letter is likely to evoke a very distinct entity in the mind of the relevant public, namely a particular phoneme. In that sense, a letter refers to a concept”. Boldly stating that a single alphabet could be equivalent to a concept, the court found the rival marks to be conceptually identical. The Applicant had argued that the rival mark lacks distinctiveness as it consists of a “very ordinary and common sign, namely a capital letter ‘K’”, and that such a single alphabet is an “inherently weakly distinctive element”. However, finding identity on two of the three points of comparison, coupled with the similarity in the nature of the services offered under the rival marks, the CJEU upheld the finding of a likelihood of confusion. It is pertinent to note that the court expressly declined to rule on the circumstances in which a letter of the alphabet must be regarded as having inherent distinctiveness.

Conclusion

It is evident that several brands use either single alphabets or acronyms of their trade/corporate name as their brand logos. However, it would be rather extreme to suggest that someone could claim a monopoly over a letter with respect to certain goods and services. For instance, if a search engine were to use a stylized version of the letter ‘G’ as its logo, chances are that Google would be able to successfully oppose it. Thus, the idea of a ‘letter as a concept’, and assessing conceptual similarity on the basis of a common letter, seems somewhat absurd. On the other hand, refusing registration of trademarks which are stylized/figurative versions of alphabets, symbols, letters, etc. also seems absurd. Thus, it is the author’s opinion that the standard of comparison with respect to such marks should be stricter. Factors that should be given importance are the nature and similarity of the goods and services; the colour scheme; the graphic effects surrounding the single alphabet/symbol/letter; and the popularity of the brand or trade mark. However, the comparison in such cases should not be made on the basis of phonetic or conceptual similarity, because that would allow the earlier trade mark owner to assert, in essence, a monopoly over a letter. The author also opines that the validity of a trademark application seeking registration of a single-letter trademark ought to depend on (a) the existence of any unique dominant elements aside from the alphabet, (b) the distinctiveness of the graphic effects added, (c) the presence of any distinguishing elements such as colour scheme, font size, background design, etc., and (d) the popularity of the brand.
Otherwise, allowing registrations of single-letter trademarks which have no unique elements beyond basic graphic effects might lead to such applicants claiming a monopoly over the letter in their subject class of goods/services. To conclude, it is argued that the courts need to pick up where the CJEU left off and answer the burning question: in which circumstances must a letter of the alphabet be regarded as having inherent distinctiveness?

[1] Section 9(1)(a), Trademarks Act 1999.
[2] (1997) 6 SCC 35.
[3] Ibid, paragraph 2.
[4] 2007 (35) PTC 406 (Del.).
[5] H&M Hennes & Mauritz AB & Anr. v. HM Megabrands Pvt. Ltd. & Ors., 2018 SCC OnLine Del 9369.
[6] Case T-824/16, Judgment of the General Court (Fifth Chamber) of 13 March 2018.
- Is India Equipped to Defend Privacy in the Era of Artificial Intelligence?
Originally posted on July 28, 2020. Authored by Aarya Pachisia.

Introduction: Need for Protection of Privacy

Privacy violations have become inevitable in the age of Artificial Intelligence (“AI”). Cambridge Analytica is a classic example of how non-personal data of citizens can be used to predict sensitive personal information about them with the use of a single algorithm[1]. AI software today is sophisticated enough to compute and predict the behaviour of an individual and chart possible patterns from data already provided to it, thereby blurring the lines between personal and non-personal data[2]. Such a grave invasion of privacy calls for a robust legal framework to check the rapidly advancing sphere of AI. On 19 February 2020, the European Commission (“Commission”) issued a White Paper on AI with proposals for regulating it, and subsequently invited comments from stakeholders. The Commission recognised the complexity of AI and the inability of the present framework of the European Union’s General Data Protection Regulation (“GDPR”), the legislation that protects the privacy of EU citizens, to cope with its constantly changing dynamics. It therefore becomes imperative for lawmakers to design a legislative framework which regulates AI and ensures its compliance with the European standard of privacy. In India, the right to privacy recently received due recognition in the landmark judgment of the Hon’ble Supreme Court in K.S. Puttaswamy v. Union of India[3], where a nine-judge bench unanimously held it to be a fundamental right under Article 21 of the Constitution of India. The rights and obligations with respect to data privacy are codified in the Personal Data Protection Bill, 2019 (“Bill”). The Bill, which finds its roots in the GDPR, has been referred to a Joint Parliamentary Committee headed by Ms. Meenakshi Lekhi for further suggestions. Its aim is to establish mechanisms for the protection of personal data, and it proposes the setting up of a Data Protection Authority of India for that purpose. The aim of this article is to assess whether the Bill is equipped to deal with privacy violations in the face of lightning advancements in the field of AI. Although various questions arise at this crossroads, this article specifically analyses parts of the Bill, focusing on two key questions: (i) whether the processing of data with respect to AI is envisaged within the scope of the Bill; and (ii) what principles govern the processing of data with respect to AI under the Bill.

Data Processing under the Bill

Section 2 of the Bill addresses the applicability of the Bill to certain categories of data processing. Data is divided into two categories: (a) data that can be traced back to an individual, i.e., personal data; and (b) anonymized data. Anonymization is the process of removing personal identifiers from data, thereby preventing the information from being traced back to a particular individual. The Bill defines anonymization as, “in relation to personal data, … such irreversible process of transforming or converting personal data to a form in which a data principal cannot be identified”[4].
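As a minimal illustration of what this definition targets, consider the deliberately naive sketch below. The field names and data are hypothetical, and real anonymization must be irreversible and must also deal with indirect identifiers, a point the following paragraphs return to.

```python
# A deliberately naive sketch of "anonymization": stripping direct identifiers
# so a record can no longer be tied to a data principal. Field names are
# hypothetical. Indirect (quasi-)identifiers survive this step, which is
# exactly why naive anonymization can fail the Bill's "irreversible
# process" standard.

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

user = {
    "name": "A. Principal",
    "email": "a.principal@example.com",
    "pincode": "110001",   # quasi-identifier: survives this step
    "birth_year": 1994,    # quasi-identifier: survives this step
    "purchase_category": "fitness",
}

print(strip_direct_identifiers(user))
# {'pincode': '110001', 'birth_year': 1994, 'purchase_category': 'fitness'}
```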
Personal data is defined under the Bill as “data about or relating to a natural person who is directly or indirectly identifiable, having regard to any characteristic, trait, attribute or any other feature of the identity of such natural person, whether online or offline, or any combination of such features with any other information, and shall include any inference drawn from such data for the purpose of profiling”[5]. Thus, personal data must be identifiable to a natural person. AI algorithms typically process data in two stages: (i) the algorithmic (training) stage, and (ii) the actual use of data to predict behaviour. In the algorithmic stage, the user trains the algorithm by providing the necessary data; in the following stage, the software predicts behaviour from the data with which it was trained. Two glaring problems present themselves regarding the scope of the definition provided under Section 3 of the Bill. First, the provision does not expressly include the processing of data during the algorithmic stage. The EU bodies had to provide an express clarification on this point. Data provided to such algorithms can be discriminatory in nature, and it therefore becomes imperative to include processing of data at this stage within the scope of the Bill, irrespective of the anonymity of such data. No express clarification has yet been provided by the Indian authorities on this subject. Since the legislation is not yet notified, there is a good chance of either the government clarifying the position or the courts reading the algorithmic stage into the scope of Section 2 of the Bill. Second, Section 2(B) of the Bill includes anonymized data within its scope only if it is being processed by the government after being taken from data fiduciaries. This is highly problematic at two levels. First, recent research[6] has shown that anonymized data can easily be deanonymized, enabling such data to be traced back to its principal (a toy sketch of such a ‘linkage attack’ appears below). Second, the provision only envisages processing by the government and not by private parties, creating a grey area in which corporations can commit privacy breaches without any consequences: the AI giants of the private sector cannot be held liable for processing anonymized data, and may use it to train algorithms or even deanonymize it without consequence, since processing of anonymized data otherwise than by the government falls outside the scope of the Bill. It therefore becomes crucial to include processing of data by private entities, irrespective of the nature of the data provided to the AI algorithm; and if Parliament enacts legislation dealing specifically with AI, the same safeguards will have to be incorporated in that legislation as well. For the following arguments to hold, I shall assume that the above recommendations have been added to the Bill, and then analyse whether any further lacuna needs to be filled to protect privacy in the era of AI.

Provisions Governing Data Processing by AI

This article specifically analyses the provisions dealing with the following principles: purpose limitation; storage limitation; the accuracy principle; and the right to be forgotten.
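Before turning to those principles, the deanonymization risk flagged above is worth making concrete. The datasets in the sketch below are invented for illustration; the point is only that ‘anonymized’ records can be re-identified by joining the surviving quasi-identifiers against a public, identified dataset.

```python
# Toy linkage attack: re-identify "anonymized" records by matching
# quasi-identifiers (pincode, birth year) against an identified public list.
# All data here is fabricated for illustration.

anonymized_health_data = [
    {"pincode": "110001", "birth_year": 1994, "diagnosis": "hypertension"},
    {"pincode": "560034", "birth_year": 1987, "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "A. Principal", "pincode": "110001", "birth_year": 1994},
    {"name": "B. Citizen", "pincode": "560034", "birth_year": 1987},
]

def deanonymize(anon_rows, identified_rows, keys=("pincode", "birth_year")):
    """Re-attach identities by joining on quasi-identifier columns."""
    matches = []
    for anon in anon_rows:
        for known in identified_rows:
            if all(anon[k] == known[k] for k in keys):
                matches.append({**known, **anon})
    return matches

for row in deanonymize(anonymized_health_data, public_voter_roll):
    print(row["name"], "->", row["diagnosis"])
# A. Principal -> hypertension
# B. Citizen -> diabetes
```

On the Bill as drafted, nothing would reach a private actor running precisely this kind of join, which is the gap the article identifies.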
The Bill envisages these principles within its scope, but it also states that the central government can exempt any government agency from the Bill, including the right to be forgotten; no such provision existed in the 2018 draft Bill.

Purpose Limitation

Purpose limitation refers to processing data only for the purposes consented to by the data principal. The Bill ensures under Sections 5, 6 and 7 that consent and free will are essential elements for the processing of data by the data processor or fiduciary. The data fiduciary is under an obligation to provide a notice specifying the purpose, the grievance redressal mechanism, the nature and category of data, and every other relevant piece of information connected to the processing of personal data. Thus, the principal shall be informed about who has access to their data and for what purpose it will be used. Another important aspect of consent is that it should be as easily withdrawn as it was given. The Bill dilutes the consent framework by attaching a pre-condition to withdrawal, thereby depriving the principal of free will: it states that the principal must have a ‘valid’ reason to withdraw consent, but who determines the validity of that reason is not specified. If consent is withdrawn without a valid reason, all legal consequences of such withdrawal are borne by the data principal, thereby diluting the right to privacy. This problem is not specific to AI but is an overall critique of the consent framework provided in the Bill.

Storage Limitation

The storage limitation principle provides that once data has served its purpose, it should be deleted. Under Section 9 of the Bill, the data fiduciary is obligated to do so. Jurisdictions globally have adopted this principle, with very few exceptions, to secure the privacy of their citizens. For instance, in Canada, data that has served its purpose ought to be destroyed, anonymised or erased. Similar provisions can be identified in South Africa, Australia and the United Kingdom.

Accuracy Principle

The accuracy principle is essential to AI, and the necessity of accurate data is envisaged under Section 8. The data used to train an algorithm needs to be accurate for the machine to predict appropriate results; if the data is inaccurate, the results will be incorrect, harming the efficiency of the AI software. For instance, if the process of granting loans is automated and the data used to train the software is inaccurate, a person ineligible for a loan may be granted one, underscoring the importance of accurate data.

Right to be Forgotten

The right to be forgotten is an essential principle governing privacy in the 21st century, and it has been recognized in the GDPR as well as the Bill. The principle lays down the right to prevent the disclosure of an individual’s personal data by a fiduciary on certain grounds, for instance, where the purpose for which the data was collected has been fulfilled or where the consent for processing the data has been withdrawn. The question that arises here is: how do we teach software to forget data that has been used to train it to predict a certain pattern? It is necessary to answer this question because the upcoming generation is growing up in the era of AI. Their mistakes, movements, engagements and achievements are constantly being recorded.
Their lives are constantly being processed. COVID-19 has forced us to transition to a digital way of life: work as well as education are now digitised, and communications and classroom lectures are recorded. Since India does not yet have legislation in place, it is not clear how this wealth of data is currently being processed or with which entities it is being shared. This poses a grave threat to the privacy of every individual. The data of millions of students, professors and employees is constantly being fed to algorithms.

Conclusion

We live in a time when each and every one of us will be heavily impacted by AI, and its advancement gravely threatens individual privacy. From smart home appliances to the products we buy on e-commerce platforms, most online platforms are constantly predicting our behaviour and blurring the lines between personal and non-personal data. A major problem with AI is that even developers often do not understand how an AI arrives at an outcome, creating a ‘black box’ and a great deal of mystique surrounding the technology. Research on teaching AI to forget is at a preliminary stage, yet it addresses one of the most important principles that will enable AI and privacy to co-exist. It is therefore vital to ensure and uphold the right to be forgotten, and necessary not only to strengthen the Bill but also to introduce legislation or an express clarificatory directive dealing specifically with the regulation of AI.

Aarya Pachisia is a 4th-year law student at Jindal Global Law School. She is deeply interested in issues surrounding data privacy and AI, and is currently interning at the Indian Society for Artificial Intelligence and Law, which motivated her to write this piece.

[1] Cambridge Analytica, a British consulting firm, misappropriated Facebook users’ data, resulting in a massive data leak also addressed by Facebook executives. Complaints were also filed against Cambridge Analytica before the Federal Trade Commission in the United States.
[2] See Keith Collins and Gabriel J.X. Dance, ‘How Researchers Learned to Use Facebook “Likes” to Sway Your Thinking’, New York Times, Mar. 20, 2018, at B5, available at https://www.nytimes.com/2018/03/20/technology/facebook-cambridge-behavior-model.html.
[3] (2017) 10 SCC 1.
[4] Section 3(2), Personal Data Protection Bill 2019.
[5] Section 3(28), Personal Data Protection Bill 2019.
- “I AGREE”: Analysis of Click-Wrap Agreements on Social Media Platforms
Posted on July 23, 2020. Authored by Manika Dayal.

The evident and rapid popularization of social media platforms all over the world arguably rests on a promise of simplicity: a simple means of connecting communities, cultures, people, traditions, trends and ideologies. In theory, the intent behind social media platforms seems almost too idealistic to accomplish. Well, it is, because as with almost everything in the world, with the good comes the fine print. Social media platforms have over time become a multi-billion-dollar industry which grows by the second. This raises the stakes for all the stakeholders involved (i.e., the investors, the operators/owners and the creative sector). Further, the original content that binds all these stakeholders has become extremely vulnerable to intellectual property rights violations. Given the lack of concrete precedents in this field and lax enforcement, the rules, regulations and ‘Terms of Service’ of the various platforms demand a closer look. From the point of view of an average user, User Generated Content Agreements (“UGC Agreements”) are extremely important. Three aspects of UGC Agreements are raised for consideration: first, the rights vested in the user when he/she reproduces original content on the platform; second, the storage of the user generated content; and third, the rights vested in the online platform for the purpose of reproducing the user generated content. For the purposes of this article, we analyze the Terms of Service, in the context of user generated content, of three popular social media platforms: YouTube, Instagram and TikTok.

YouTube

In YouTube’s Terms of Service, UGC-related guidelines can be traced to the ‘Your Content and Conduct’ subheading, which elucidates the rights vested in the owner and the content creator. It explicitly states that sole ownership rights in the content are vested in the creator. However, when the creator uploads his/her content on the platform by accepting the click-wrap agreement, they license their content to YouTube, granting it: “a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content (including to reproduce, distribute, prepare derivative works, display and perform it) in connection with the Service and YouTube’s (and its successors’ and Affiliates’) business, including for the purpose of promoting and redistributing part or all of the Service.” This license, along with its correlated rights, is also extended to other users of the service, while barring the enjoyment and enforcement of these rights outside the service. One example of an affirmative interpretation of a click-wrap copyright license of users’ content to a social media platform is Agence France-Presse (AFP) v. Morel, where the United States district judge, citing the Terms of Service of Twitter, held that “The Twitter TOS (terms of service) provides that users retain their rights to the content they post — with the exception of the license granted to Twitter and its partners”. YouTube, however, can be allotted the status of an intermediary, and consequently cannot be held liable for third-party content, i.e. user generated content, in accordance with national and international case law such as Frank Peterson v. Google LLC, YouTube LLC, YouTube Inc., Google Germany GmbH and Elsevier Inc. v. Cyando AG[1], Shreya Singhal v. Union of India[2], Myspace v. Super Cassettes Industries Ltd.[3] and Christian Louboutin SAS v. Nakul Bajaj, as it merely plays the part of a facilitator rather than an “active role”. The primary issue with YouTube has been the excessive copyright violations on its platform, which is why, as of 9 July 2019, the platform introduced a new version of its content protection policies, entailing a much more comprehensive uploading and takedown procedure. The new procedure implements a content identification tool called ‘Content ID’, which requires copyright holders to provide YouTube with exact time stamps for all new content identification claims. YouTube’s policy describes this tool as a “digital fingerprinting system that allows rights holders to upload content that they have exclusive rights to as reference files, and then scans videos uploaded to YouTube for matches of that content. When a user uploads content, Content ID scans against the database for matching videos”. The object is to enable copyright holders to identify and locate the exact portion of time for which their video is under a particular claim. However, this process applies only to manual individual copyright claims and does not extend to the AI-based algorithm put in place for flagging infringing content. Another point of contention is the storage of user generated content after the deletion of an account. The Terms of Service state that content is stored in the platform’s database for the lifetime of the account, and once the account is deleted, YouTube automatically initiates its removal process. However, one sentence in that clause requires deliberation: “The licenses granted by you continue for a commercially reasonable period of time after you remove or delete your Content from the Service. You understand and agree, however, that YouTube may retain, but not display, distribute, or perform, server copies of your videos that have been removed or deleted.” A literal interpretation of this sentence indicates that YouTube, as a data controller, will indefinitely be in possession of user generated content after the deletion/removal of an account. This can be dangerous in a plethora of situations, for instance if its databases and servers were hacked.

Instagram

Instagram is another widely used social media application which functions solely on user generated content. Its Terms of Service, in comparison, have proved to be among the most simply worded, which makes them easier for a person without a legal background to comprehend. Similar to YouTube’s Terms of Service, Instagram in its capacity as a service provider does not claim ownership of the user generated content: “We do not claim ownership of your content, but you grant us a license to use it. Nothing is changing about your rights in your content. We do not claim ownership of your content that you post on or through the Service. Instead, when you share, post, or upload content that is covered by intellectual property rights (like photos or videos) on or in connection with our Service, you hereby grant to us a non-exclusive, royalty-free, transferable, sub-licensable, worldwide license to host, use, distribute, modify, run, copy, publicly perform or display, translate, and create derivative works of your content (consistent with your privacy and application settings). You can end this license anytime by deleting your content or account. However, content will continue to appear if you shared it with others and they have not deleted it.” The only debatable aspect of this clause may be the creation of derivative works from the user generated content; however, a devil’s advocate argument would be that such action is in accordance with the Terms of Service and in abidance of users’ ‘privacy and application settings’, which would be deemed implied consent. With regard to deletion of the account and the consequent storage of user generated content, the Terms of Service state: “Content you delete may persist for a limited period of time in backup copies and will still be visible where others have shared it.” It is plausible to assume that shared user content would not be removed in the interest of other users and their rights. However, the retention period for the user’s own content is vaguely worded, and the Terms fail to specify a period for deletion.

TikTok

TikTok, in comparison to other similar apps which function solely on user generated content, arguably maintains exceedingly arbitrary and service-provider-oriented Terms of Service. The Terms of Service discussed above acknowledge the implied presence of moral rights as part of the creator’s undiminished ownership rights. On the contrary, by agreeing to TikTok’s click-wrap agreement, a user consents to the following: “By posting User Content to or through the Services, you waive any rights to prior inspection or approval of any marketing or promotional materials related to such User Content. You also waive any and all rights of privacy, publicity, or any other rights of a similar nature in connection with your User Content, or any portion thereof. To the extent any moral rights are not transferable or assignable, you hereby waive and agree never to assert any and all moral rights, or to support, maintain or permit any action based on any moral rights that you may have in or with respect to any User Content you Post to or through the Services.” A literal interpretation of this waiver clause translates into a surrender of all the rights vested in the user generated content, transforming it into property of the platform. The stakes are raised further because, unlike on the other platforms discussed above, when a user records live audio to use as background sound for their content using TikTok’s sound recording feature, TikTok also obtains rights to use that voiceover in forms other than the user’s own content. This makes an original composition authored by the user also the property of the platform, with no recourse available to the user. It is also interesting to note that this feature in itself endangers the platform’s ‘intermediary’ status, as it possesses overreaching powers that a facilitator normally would not. Further, it can be argued that the platform plays an ‘active role’ with regard to user generated content, especially when the interface of the application is compared with the definition of an intermediary under Section 2(1) of the Information Technology Act, 2000. This argument is reaffirmed by an alarming sentence below the paragraph quoted above, which states: “We, or authorised third parties, reserve the right to cut, crop, edit or refuse to publish, your content at our or their sole discretion.” With this, users award the platform and third parties arbitrary power and discretion. The Terms of Service of this platform also do not mention what happens to the user generated content uploaded on the platform after the deletion of an account. This is evidently contrary to its competitors, who have specifically provided a plan of action after an account has been requested for removal/deletion. As further provided in the Terms of Service: “Tiktok may enter into mutual contractual agreement with some creators, where TikTok may enjoy certain exclusivity rights over the content of these creators. In this regard, TikTok has undertaken legal action as part of its commitment to protect its users from copyright infringement.” Further, the platform, which ironically functions on user generated content, conveniently absolves itself of any obligation to respond to feedback and concerns posted on the platform in the interest of the user community: “we have no obligation to review, consider, or implement your Feedback, or to return to you all or part of any Feedback for any reason”. The exploitation is not limited to user generated content but extends to feedback posted by users as well: “You irrevocably grant us perpetual and unlimited permission to reproduce, distribute, create derivative works of, modify, publicly perform (including on a through-to-the-audience basis), communicate to the public, make available, publicly display, and otherwise use and exploit the Feedback and derivatives thereof for any purpose and without restriction, free of charge and without attribution of any kind, including by making, using, selling, offering for sale, importing, and promoting commercial products and services that incorporate or embody Feedback, whether in whole or in part, and whether as provided or as modified.” Apart from this, TikTok has been at the centre of numerous data privacy concerns, which led to its ban in India. These concerns also make compliance with the European Union’s General Data Protection Regulation difficult, as any data controller is required to specify the nature, extent, purpose and duration of the data collected or processed by it. Additionally, it has been noted that an online platform goes beyond the role of an intermediary when it modifies or re-transmits user generated content[4]. Thus, TikTok’s overarching Terms of Service might disallow the platform from relying on the safe harbour protection granted to intermediaries.

Conclusion

In the fast-growing age of social media, the responsibility of protecting consumer data should ideally be balanced. Given the large influx of data on these platforms, users themselves should exercise a certain degree of responsibility and caution as to the content they choose to upload, since data, being an intangible yet precious commodity, is vulnerable to exploitation at every step.
[1] C-682/18 Frank Peterson v Google LLC, YouTube LLC, YouTube Inc., Google Germany GmbH and C-683/18 Elsevier Inc. v Cyando AG.
[2] Shreya Singhal v. Union of India, AIR 2015 SC 1523. [Section 79(3)(b) has to be read down to mean that the intermediary, upon receiving actual knowledge that a court order has been passed asking it to expeditiously remove or disable access to certain material, must then fail to expeditiously remove or disable access to that material. This is for the reason that otherwise it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.]
[3] Myspace Inc. v. Super Cassettes Industries Ltd., MIPR 2011 (2) 303. [Under the DMCA, a “red flag” test applies, which has both subjective and objective elements. In determining whether the service provider was aware of a red flag, the subjective test is whether, under the given facts and circumstances, a reasonable observer could discern infringement; such determination should, however, be arrived at by using an objective standard.]
[4] C-682/18 Frank Peterson v Google LLC, YouTube LLC, YouTube Inc., Google Germany GmbH and C-683/18 Elsevier Inc. v Cyando AG. The operative part can be found in paragraph 75, wherein it is stated: “On the other hand, a service provider goes beyond the role of an intermediary where it intervenes actively in the ‘communication to the public’ of works. That is the case if the provider selects the content transmitted, determines it in some other way or presents it to the public in such a way that it appears to be its own. In those circumstances, the provider carries out the ‘communication’, together with the third party that initially provided the content. This is also the case if that same provider, on its own initiative, makes further use of that ‘communication’ by retransmitting it to a ‘new public’ or via a ‘different technical means’.”
- Spotify vs Apple: Time to Play Fair
Posted on July 20, 2020. Authored by Tanya Garg, a recent law graduate from Symbiosis Law School, Pune.

Apple Inc. has been under fire several times before for the preferential way in which it operates marketplaces, setting rules in a way that benefits the company at the expense of others. After years of discriminatory treatment on the Apple App Store (“App Store”), Spotify Technology S.A. (“Spotify”) filed a formal antitrust complaint (“Complaint”) against Apple on 13 March 2019. Over a year after the Complaint was filed, on 16 June 2020, the European Commission (“Commission”) opened formal antitrust investigations to determine whether the App Store’s rules for app developers on the distribution of apps violate the European Union’s competition rules. The Commission’s ruling will give clarity on the accountability of online platforms that accord themselves preferential treatment, and pave the way for a level playing field. In a press release, Spotify’s founder Daniel Ek said that Apple has been acting as both player and referee to deliberately disadvantage other app developers. This was evident when Apple eliminated choice of payment system and replaced it with a mandatory purchase system built into the App Store itself, charging a 30% commission on payment transactions, limited to content-based apps like Spotify. Thus, for Spotify to use Apple’s billing system to allow its customers to upgrade to its ‘Premium’ service (an ad-free, subscription-based tier), Spotify and others would have to pay 30% of any subscription fees; to retain, say, €9.99 net of that commission, an app would have to charge about €14.27 in-app (9.99 ÷ 0.7). This was the first of many moves by Apple to dissuade customers from subscribing to Spotify. According to Spotify, this action was taken only after unsuccessful attempts to resolve the issues directly with Apple. In response to Spotify’s claims, Apple has stated that Spotify, while owing much of its success today to the App Store ecosystem, has not contributed enough to the maintenance and development of that marketplace. The investigation will also help address the broader discussion on reviewing the suitability of the legal framework in the E-Commerce Directive for digital services in the European Single Market.

Investigation

The Commission shall investigate whether the restrictions imposed by the App Store breach EU competition rules under Article 101 and/or Article 102 of the Treaty on the Functioning of the European Union (“TFEU”). Article 101 TFEU prohibits restrictive agreements between independent undertakings which may affect trade between Member States and which have as their object or effect the prevention or restriction of competition in the market, where that effect is appreciable. Such agreements include those between independent market operators acting either at the same level of the economy, as potential competitors, or at different levels, as a manufacturer and a distributor. Article 102 TFEU, on the other hand, prohibits dominant undertakings from abusing their market power by imposing unfair prices or trading conditions. It is pertinent to note that Article 102 TFEU aims to protect effective and healthy competition in the internal market. Therefore, as laid down in Post Danmark, in order for Apple to dodge liability, it must demonstrate that its conduct is objectively necessary or that the exclusionary effect produced may be counterbalanced, or even outweighed, by efficiency benefits that also accrue to consumers.
The Commission's investigation will concentrate on the following three restrictions imposed by Apple in the App Store Guidelines:
- the mandatory use of Apple's in-app purchase system ("IAP") for the distribution of paid digital content;
- the 30% commission charged by Apple to app distributors on all subscription fees paid through IAP, which has caused several competitors to disable in-app subscription features or raise their subscription fees and pass those costs on to consumers; and
- rules preventing developers from informing users of alternative purchasing possibilities outside of apps, such as on their websites, which are often cheaper.

The investigation will determine:
- whether the contractual obligations imposed by Apple on app developers have an appreciable effect on competition in the markets for music streaming services and e-books, by giving Apple's own apps an unfair advantage; and
- whether Apple has been abusing its dominance as the only app store supplier for iOS devices by dictating the terms of the App Store to manipulate competition in the market.

Anti-Competitive Agreement
It is pertinent to note that Article 101 of the TFEU prohibits agreements "which have as their object or effect the prevention, restriction or distortion of competition within the internal market". Whether an agreement is restrictive by its 'object' depends on the content of the agreement itself, while restrictiveness by 'effect' depends on the consequences of the agreement. The Commission has previously taken the position that the complaining party may demonstrate either restrictiveness of object or of effect, and need not show both conditions cumulatively[1]. However, in some cases the Commission has also noted that agreements restrictive by object may not be per se anti-competitive, and the appropriate economic context must be taken into account[2]. Nonetheless, there appears to be a strong argument for Spotify that the App Store Guidelines are restrictive by their object, i.e., they provide discriminatory and favourable treatment to certain apps owned by Apple and thereby result in the 'prevention, restriction, or distortion of competition within the internal market'.

Abuse of Dominant Position
It is essential for the Commission to define the relevant market in question in order to establish whether an appreciable impact on competition has occurred and whether Apple holds a dominant position in that market. A point of debate here may be whether the relevant market in which Apple is allegedly abusing its dominance is specifically the app store for iOS devices, or whether it also includes app stores for other operating systems. The Commission focuses on cases where the probability of uncovering anti-competitive conduct is high, there exists a likelihood of harm to consumers, and there is a significant impact on the functioning of competition in the market. Where an infringement is found, the Commission has the power to impose fines and to order an end to the anti-competitive practices. The investigation is of particular interest because the Commission has been coming down heavily on companies like Google and its Android ecosystem, imposing fines of €2.42 billion and €4.34 billion, respectively.
These hefty measures comprise the €2.42 billion fine on Google for abusing its dominance as a search engine by giving an advantage to its own shopping service, and the €4.34 billion fine for imposing illegal restrictions on Android device manufacturers and network operators in order to divert traffic to Google's search engine.

Effect on the Industry
It is vital to acknowledge that Apple operates a platform that, for numerous people around the world, is the gateway to the internet. However, Apple has introduced rules to the App Store that influence consumers' choice and suppress innovation. Being the owner of the iOS platform and the App Store, and a competitor to services like Spotify, Apple grants itself unfair advantages. This investigation stems from the long-standing rivalry between the music streaming services Spotify and Apple Music. At the core of Spotify's case is that the App Store rules impose obligations on competitors that Apple Music escapes. Spotify maintains that this is not about Apple Music versus Spotify, but about giving all players in the market a fair footing. As a testament to this sentiment, several other music streaming services have echoed and backed Spotify's complaints. Apple's behaviour is also the subject of pending investigations in the US (before the FTC) and, most recently, in the Netherlands. According to data published by Spotify, it has more than 90 million monthly subscribers, a substantial portion of whom undoubtedly use iOS. Given this sheer volume of users, a finding by the European Commission that Apple's behaviour breaches European Union competition rules may have significant consequences for mobile app stores. As an avid user of both the iOS operating system and Spotify, the author seeks to provide an alternative line of reasoning to suggest that Apple may be breaching EU competition rules. This decision will pave the way for fair and homogeneous rules for all companies while fostering a healthy market. Additionally, consumers will be able to choose apps purely on their merits. Healthy competition and rivalry benefit all players and their consumers through lowered entry barriers, greater choice, lower subscription prices, and innovative music streaming services. Hopefully, after the Commission's decision, new players will emerge in the market, giving consumers a greater degree of selection and dismantling Apple's dominance in the music streaming industry!

[1] Case 56/64, Consten SàRL and Grundig GmbH v Commission (1966). [2] Case C-382/12 P, MasterCard and Others v Commission (2014).
- Sweeping Government Ban on Chinese Apps under the IT Act – Tyranny or Safeguarding?
Originally posted on 30 June 2020 Authored by Tanya Varshney Source: lokmarg.com

The breaking news of the Central Government's ban on 59 Chinese apps, including the popular TikTok, CamScanner, Shein etc., took the nation by storm on 29th June 2020. The Ministry of Electronics and Information Technology (MEIT) imposed this sweeping ban by way of a Press Release under Section 69A of the Information Technology Act (IT Act) read with 'relevant provisions' (the order does not specify which) of the Information Technology (Procedure and Safeguards for Blocking of Access of Information by Public) Rules, 2009 (Blocking Access Rules). The broad grounds given for the ban were 'sovereignty and integrity of India, defence of India, security of state and public order'. In recent times, Chinese apps, and particularly TikTok, have been under fire for allegedly collecting unauthorized data which could potentially be used against national security. In the United States, Senator Josh Hawley, in a meeting with government officials and lawmakers, discussed the dangers of the TikTok app on the devices of government officials and their families while addressing potential leaks of sensitive information. Some US states have also favoured legislation banning TikTok, at least for federal employees. In this article, I examine the scope of restrictions under the IT Act along with the jurisprudence on blocking access and the right to freedom of speech and expression.

Access to Online Apps – Fundamental Right?
Section 69A of the IT Act authorizes the Central Government to issue directions for blocking public access to any information through any computer resource where it is satisfied that "it is necessary or expedient so to do, in the interest of sovereignty and integrity of India, defence of India, security of the State, friendly relations with foreign States or public order or for preventing incitement to the commission of any cognizable offence relating to above". The constitutionality of this provision was challenged before the Supreme Court in the landmark case of Shreya Singhal v. Union of India, where the role of online intermediaries and freedom of speech was balanced against Sections 66A, 69A and 79 of the IT Act. The court noted the Statement of Objects and Reasons of the Amendment Act introducing this provision, which justified the inclusion of penal provisions in the IT Act to combat the rapid increase of cyber crimes such as voyeurism, breach of confidentiality, data leakage by intermediaries, e-commerce frauds, phishing, identity theft, etc. As noted above, Section 69A authorizes the Government to block access to any information. While it is clear that the right to freedom of speech and expression under Article 19 of the Indian Constitution envisages the right to disseminate information, there is also some jurisprudence that access to information is itself included in the right to freedom of speech and expression. Internationally, the report of the UN Special Rapporteur has emphasized the importance of access to the internet and noted that "there should be as little restriction as possible to the flow of information on the Internet, except under a few, very exceptional and limited circumstances prescribed by international law for the protection of other human rights".
The United Nations has relied on a three-part test: (a) the restriction on freedom of speech and expression must be provided by a law accessible to the public; (b) the restriction must pursue a legitimate objective, such as the reputation of others, public order, morals, national security, etc.; and (c) the restriction must be necessary, proportionate and the least restrictive means of achieving the legitimate objective. In India, the Supreme Court in S. Rangarajan v. P. Jagjivan Ram & Ors also noted that when freedom of speech and expression is restricted to prevent an anticipated danger, this "anticipated danger should not be remote, conjectural or far-fetched". In Ramji Lal Modi v. The State of U.P., the court noted that there must be a 'tendency' to cause public disorder. In Shreya Singhal, the court held Section 66A unconstitutional and violative of Article 19(1)(a) due to its vague language and broad scope. Section 69A, however, provides grounds for blocking access that mirror the reasonable restrictions envisaged under Article 19(2); there is thus a legitimate objective enshrined in Section 69A. The pertinent questions, then, are whether the Government's ban on 59 apps was an excessive restriction in pursuit of that legitimate objective and whether there was an imminent threat justifying this sweeping ban.

Scope of the Blocking Access Rules
Under the Blocking Access Rules, the Designated Officer appointed by the Central Government can issue directions to block access to information as per Section 69A. Rule 5 states that the Designated Officer may direct any agency of the Government to block access to information on receipt of a complaint from the appointed officer of an organization or a competent court. On a reading of this rule, it is unclear whether the Government or the Designated Officer has suo motu powers to issue directions blocking access to online content. Further, Rule 8 states that the Designated Officer has to issue a notice to the intermediaries or controllers of the computer resource to submit their reply and clarifications; this notice requirement applies to foreign entities as well. However, the Rules also allow for emergency blocking under Rule 9: the Designated Officer may issue directions to block access without complying with the procedure laid down in Rule 8 in cases of emergency where "no delay is acceptable". The blocking request must fall within the grounds stated under Section 69A, and blocking the information must be "necessary and expedient". Blocking orders must also be examined by a Review Committee at least once in two months. In Shreya Singhal, it was argued that Section 69A and the Blocking Access Rules do not afford any 'pre-decisional hearing' to the intermediaries and that the procedural safeguards are insufficient. However, the Supreme Court opined that the scope of Section 69A is not as broad and vague as that of Section 66A. The court also noted that "blocking can only be resorted to where the Central Government is satisfied that it is necessary so to do" and that the grounds of necessity are similar to the reasonable restrictions stated under Article 19(2). Moreover, a blocking order must record its reasons in writing, which can then be challenged in a writ petition under Article 226 of the Constitution. For these reasons, the Supreme Court upheld the validity and constitutionality of Section 69A read with the Blocking Access Rules.
However, it must be noted that these provisions leave it to the Government to decide whether blocking is justified "in their opinion". One may ask whether this places too much law-making power in the hands of the Government. The Press Release by MEIT stated that it had received "many complaints from various sources including several reports about misuse of some mobile apps available on Android and iOS platforms for stealing and surreptitiously transmitting users' data in an unauthorized manner to servers which have locations outside India". There have also been serious border tensions between India and China, resulting in severe casualties. Undeniably, there is a situation posing a potential threat to India's national security. However, these circumstances alone should not suffice to justify such a sweeping ban. There must be objective reasons indicating that these content platforms pose a threat to national security and that blocking them is 'necessary and expedient'.

What Lies Ahead – International Conflict?
General Comment 34 to the International Covenant on Civil and Political Rights (ICCPR) (ratified by both India and China) clarifies that suspending access to or blocking online platforms is an interference with the right to freedom of expression. Not only does the blocking order interfere with the right of users to access these online platforms, it also interferes with the platforms' right to disseminate their content. It is certainly possible that this dispute will reach international courts soon – if not at China's instance, then at that of one of the 59 blocked apps. In the international jurisprudence, various judgments of the European Court of Human Rights[1] have held that there must be a 'pressing social need' to justify such restrictions. The other point of contention is that the restriction must be the least restrictive means of pursuing the objective[2]. On the other hand, there are data privacy concerns to be balanced against the right to access. Intermediaries who collect and process 'unauthorized data' beyond the scope of the consent terms usually captured in click-wrap agreements should also be held responsible for unlawful data processing. However, the absence of any examination reports from the Government of India at this stage makes it difficult to assess whether unauthorized data processing in fact occurred. Many have criticized the order for giving vague and unsubstantiated reasons, while others have supported the decision in light of the tensions with China. Objectively, however, this decision may set a dangerous precedent allowing future sweeping bans on grounds such as sovereignty, public disorder and national security. It will be interesting to see whether this decision reaches the Indian or international courts and what the outcome would be. For now, there appear to be legitimate reasons justifying the blocking, but the question of 'necessity' remains open.

[1] See Lingens v. Austria (8 July 1986) Series A no 103, para 39; Handyside v. UK (7 December 1976) Series A no 24, para 48. [2] Mouvement Raëlien Suisse v. Switzerland App no 16354/06 (ECtHR, 13 July 2012), para 75; Morice v. France App no 29369/10 (ECtHR, 23 April 2015), para 127.
- Blockchain and the Data Protection Dilemma: Compatibility of GDPR with Blockchain System
Originally posted on 27 June 2020 Authored by Tanya Varshney Image Source: ibm.com

The European Union's General Data Protection Regulation ("GDPR") regulates the way personal data is stored, collected, processed, etc. by a "data controller". A controller is a natural or legal person or any authority that determines the purposes and means of the processing of personal data[1], and must comply with the GDPR's obligations. When it comes to blockchain systems, however, there are grey areas in their compatibility with the GDPR. The European Parliamentary Research Service ("EPRS") released a study on "Blockchain and the General Data Protection Regulation", which acknowledges, first, that since blockchains are distributed databases, the challenges in complying with the GDPR depend on the specific technical design of the blockchain in question. Blockchain, in simple terms, is a decentralized digital database. Because it is 'decentralized', it is hard to determine who the data controller actually is. Moreover, GDPR obligations such as data erasure, modification and minimisation become difficult to comply with, given that data is stored in a chain of blocks replicated across multiple servers. This article discusses the challenges in applying the GDPR to blockchain systems and potential solutions.

"Territorial Applicability"
The applicability of the GDPR is limited by two factors – territorial extent and the nature of the data. This section discusses the territorial scope of the GDPR with respect to blockchain systems. The GDPR applies where the processing of 'personal data' occurs (a) in the context of the activities of an establishment of the data controller or processor in the European Union; or (b) in relation to data subjects who are in the European Union[2]. In simple terms, the GDPR applies when data is collected in the course of activities occurring in the European Union (for example, personal or behavioural information collected through web cookies while browsing a website selling clothes in the European Union, regardless of whether the controller or processor is in the EU), or when the data of a natural person in the European Union is collected (for instance, data of a tourist in France collected by controllers or processors outside the EU, so long as the behaviour being monitored occurs within EU territory). Given this broad territorial scope, regardless of where the data is stored or processed, blockchain systems will fall within the scope of the GDPR whenever the data subjects, the company or authority relying on the blockchain, or the specific activities concerned (such as the sale of goods or services) are based in the EU.

"Personal Data"
The basic premise for the applicability of the GDPR is that the data processed is "personal data", that is, data relating to an identified or identifiable natural person[3]. There is some doubt whether the GDPR applies to blockchain systems where data is stored in encrypted or hashed form: the primary question is whether the stored data remains identifiable to a natural person. The GDPR recognizes the concept of 'pseudonymisation', that is, the processing of personal data in such a way that it can no longer be attributed to a specific data subject without additional information[4].
However, pseudonymised data is not excluded from the data protection regime of the GDPR; pseudonymisation is merely recommended as a means of reducing risks to the privacy of data subjects[5]. The EPRS also notes that even encrypted data would 'likely' qualify as personal data under the GDPR, as it is difficult to assess whether encrypted data has been sufficiently anonymised. Additionally, Recital 26 to the GDPR notes that pseudonymised personal data which could be attributed to a natural person by the use of additional information must also comply with GDPR obligations. Such additional information could include internet protocol addresses, cookie identifiers, radio frequency identification tags, etc., as these may leave traces that allow data to be attributed to a particular data subject[6]. Blockchain systems often use public keys, which are essentially strings of letters and numbers representing each user – somewhat like an account number – and private keys, also strings of letters and numbers, but more akin to passwords. While the data stored on a blockchain is encrypted or hashed, depending on the technical design such data can be decrypted with the corresponding private keys. There is real uncertainty about the degree of identifiability data must have to count as personal data. In this regard, the EPRS suggests that the appropriate test is whether the controller or another person is able to identify the data subject using all the 'means reasonably likely to be used'.

Data Storage, Modification, Erasure and Minimisation
Chapter 2 of the GDPR lays down key principles for the processing of personal data: purpose limitation (data must be processed for a specific purpose), data minimisation (personal data must be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed), accuracy, storage limitation (data must not be stored longer than necessary) and consent. Given these requirements, the consent obtained when data is collected must also delineate how the data will be stored, in line with the technical design of the blockchain. There is also some uncertainty regarding the storage of data in a distributed ledger for a continued period of time when the data was collected for, perhaps, a single blockchain transaction. As noted above, compatibility with the GDPR here depends on the nature of the consent contract between controller and subject. Data minimisation and accuracy also raise concerns, as most blockchain systems are designed so that data cannot be removed from the blocks.

Potential Solutions
There are certainly grey areas in applying the GDPR to blockchain systems, owing to their unique and complex technical designs. However, some steps may be taken to bring such systems into compliance. Some cryptocurrency services, such as Monero, use 'secret keys' for one-time transactions. Where the purpose of data collection is merely to ascertain whether a particular transaction has occurred, zero-knowledge proofs that provide binary answers (such as yes/no or true/false) can be used. Additionally, the creation of editable blockchains can solve some of the challenges relating to data minimisation and accuracy.
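To ground the two technical points that recur above – why on-chain data resists erasure or modification, and why a hashed identifier may still be 'personal data' so long as additional information survives – here is a minimal, illustrative Python sketch. It does not describe any real blockchain: the salt, the records and the chain structure are invented for the example.

```python
import hashlib
import json

def sha256(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# (a) Pseudonymisation: replace the identifier with a salted hash.
# The pseudonym alone does not name the subject, but anyone holding
# the lookup table (Recital 26's "additional information") can
# re-identify it, so the record remains personal data in practice.
SALT = "demo-salt"   # invented for the example
lookup = {}          # the surviving "additional information"

def pseudonymise(subject_id: str) -> str:
    pseudonym = sha256(SALT + subject_id)
    lookup[pseudonym] = subject_id   # retaining this keeps the subject identifiable
    return pseudonym

# (b) A hash-linked chain: each block commits to the previous block's
# hash, so editing or erasing any block invalidates every later block.
# This is the structural root of the GDPR erasure problem.
def make_block(prev_hash: str, payload: dict) -> dict:
    block = {"prev": prev_hash, "payload": payload}
    block["hash"] = sha256(prev_hash + json.dumps(payload, sort_keys=True))
    return block

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        recomputed = sha256(block["prev"] + json.dumps(block["payload"], sort_keys=True))
        if block["hash"] != recomputed:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"subject": pseudonymise("alice"), "event": "consent given"})]
chain.append(make_block(chain[-1]["hash"], {"subject": pseudonymise("bob"), "event": "purchase"}))
print(chain_is_valid(chain))             # True

chain[0]["payload"]["event"] = "erased"  # attempt an in-place 'erasure'
print(chain_is_valid(chain))             # False: every copy of the ledger detects it
```

Destroying the lookup table (and the salt) pushes the stored pseudonyms towards effective anonymisation, which is broadly the intuition behind the 'secret key' and editable-chain proposals mentioned above.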
That being said, the main point for data controllers using blockchain systems is that consent must be appropriately obtained with respect to the specific design of the system. The terms should be intelligible and easily accessible, using clear and plain language[7]. Moreover, the GDPR requires that the data subject be able to withdraw consent at any time[8], which may cause problems where the blockchain is not editable. Consent must be given by a clear affirmative action, which implies that click-wrap and shrink-wrap agreements would be acceptable[9].

[1] Article 4(7), GDPR. [2] Article 3, GDPR. [3] Article 4(1), GDPR. [4] Article 4(5), GDPR. [5] Recital 28, GDPR. [6] Recital 30, GDPR. [7] Article 7(2), GDPR. [8] Article 7(3), GDPR. [9] Recital 32, GDPR.
- Healthcare in India and Disorders of the Central Nervous System: A Blockchain Solution
Posted on 26 June 2020 Authored by Tanya Varshney

Introduction
Medical institutions face difficulties with the interoperability of healthcare records and the complexity of the data involved[1]. In addition, there is the problem of effective communication between institutions such as hospitals, research centres, pharmacies and insurance companies, and concerns of patient confidentiality arise whenever these issues are addressed. In India, there is an institutional divide in the healthcare system between Government hospitals and private hospitals. This article addresses the shortcomings in the Indian healthcare system, with a particular emphasis on neurological diseases, and proposes a blockchain-based solution.

Indian Healthcare System: Conversation with a Neurologist
In an interview, Dr. Manjari Tripathi[2] (Neurologist) highlighted the difficulties in the healthcare domain. With respect to diseases of the central nervous system, Dr. Tripathi pointed out that MRI scan machines manufactured by different companies can lead to different diagnoses; this often results in misdiagnosis and incorrect treatment across hospitals using different machines. The problem persists because healthcare records are not interoperable enough to compare and study the complex data essential to diseases of the central nervous system.[3] Dr. Tripathi further pointed out that there is a lack of data on the population of people suffering from these diseases and on the extent of their disabilities, a problem heightened by the lack of cooperation between Government and private hospitals. A lack of funding and the increasing demand for healthcare have put Government hospitals under strain, leaving them unable to provide adequate services or effectively organize healthcare records. Dr. Tripathi shared that Government hospitals such as AIIMS, New Delhi see thousands of patients daily who do not have proper prior medical records, so doctors cannot understand patient history or diagnose diseases efficiently. Research into these diseases is stalled because doctors and researchers lack the required statistics and data, and general practitioners are often unable to identify the clinical characteristics and patterns of healthcare utilization in neuropathic disorders[4].

A Blockchain Solution
Blockchain is a peer-to-peer distributed ledger technology ("DLT"). Blockchain supports the use of "smart contracts", through which changes in records are updated and can be tracked by an automated decentralized system[5]. In simple terms, each member of the blockchain stores an identical copy of the data, contributing to certifying and validating transactions as well as updating the data.[6] DLT thus removes the need for intermediaries and provides a decentralized system for data storage. There are various advantages to implementing DLT in the Indian healthcare system, particularly for neuropathic disorders. First, it makes access to health records and statistics easier for medical and research institutions across India: owing to its decentralized nature, all members share a single, replicated source of truth for medical data, as the sketch below illustrates.
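As a rough sketch of this replication idea – with in-memory 'members' standing in for networked nodes, no real consensus protocol, and member names and records invented for the example – each participant holds a full copy of the ledger, every update is broadcast to all of them, and agreement is verified by comparing a hash fingerprint of each copy:

```python
import hashlib
import json

class Member:
    """One participant in the ledger; holds a full copy of all records."""
    def __init__(self, name: str):
        self.name = name
        self.ledger = []   # intended to be identical across all members

    def apply(self, record: dict) -> None:
        self.ledger.append(record)

    def fingerprint(self) -> str:
        # Hash of the whole copy; equal fingerprints mean identical copies.
        return hashlib.sha256(json.dumps(self.ledger, sort_keys=True).encode()).hexdigest()

members = [Member(n) for n in ("hospital", "research_centre", "pharmacy", "insurer")]

def broadcast(record: dict) -> None:
    """Send an update to every member, standing in for real network propagation."""
    for m in members:
        m.apply(record)

broadcast({"patient": "px-001", "event": "MRI scan uploaded"})
broadcast({"patient": "px-001", "event": "diagnosis recorded"})

# Validation by consensus: all copies should agree.
fingerprints = {m.fingerprint() for m in members}
print("copies identical:", len(fingerprints) == 1)   # True
```

Matching fingerprints across the hospital, research centre, pharmacy and insurer copies are what allow any member to certify a record without a central registry.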
No single member claims authority over the true data; by consensus, all members hold it identically.[7] This removes the obstacles patients face in acquiring copies of their healthcare records or transferring them to another healthcare provider.[8] It also enables improvements and advancements in the medical field, as doctors gain access to the nature, types and extent of diseases in the country; such data may help them reach conclusions as to the causes of diseases and, possibly, their cures. Misdiagnosis is also reduced, as doctors have access to the prior medical records of the patient and of other patients diagnosed with the same disease. A DLT model in healthcare can thus create a higher level of organization, accessibility and amenability to time-saving digital tools while further engaging patients in their own care.

Secondly, with respect to diseases of the central nervous system, obtaining disability certificates, and consequently disability benefits, is somewhat more challenging due to the non-visual nature of the disabilities. The Persons with Disabilities (Equal Opportunities, Protection of Rights and Full Participation) Rules, 1996 provide the broad guidelines for the issue of disability certificates. As per a 2018 notification laying down the 'Guidelines for Assessment of Extent of Disability and Certification of Specified Disabilities', the assessment of disability for neurological diseases largely depends upon the effects of, and fulfilment of, certain functional-impairment criteria for different types of neurological disorders. Through a blockchain model, obtaining a disability certificate can become an automated procedure based on the fulfilment of criteria uploaded to the blockchain network by registered medical practitioners or institutions (see the sketch at the end of this section). Once the required conditions are met and reflected in the patient's health records, the patient automatically becomes eligible for a disability certificate and the attendant benefits.

Thirdly, blockchain offers efficiency gains in claiming medical insurance. Insurance companies require the insured to provide multiple documents: health records, spending records, reports of scans and tests, etc. Blockchain can address this by storing encrypted information about the insured's age, residence, family income and family composition, as well as medical information such as disabilities.[9] Further, a blockchain model can implement a digital signature or verification key to verify the authenticity of medical records, as also illustrated in the sketch below. This speeds up the process of claiming insurance and reduces costs for both the insurance companies and the insured. There may also be additional services, such as notifying the owner of a profile when their insurance, disability or medical status changes and automatically reapplying for benefits or insurance options[10].
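The sketch below ties the second and third points together, under plainly hypothetical assumptions: the three eligibility criteria are invented stand-ins for the functional-impairment thresholds of the 2018 guidelines, and an HMAC over a shared secret stands in for the public-key signatures a real deployment would use. A practitioner signs each finding, the ledger accepts only verified findings, and a smart-contract-style rule issues the certificate as soon as all criteria are present:

```python
import hmac
import hashlib
from dataclasses import dataclass, field

# Hypothetical criteria standing in for the functional-impairment
# thresholds of the 2018 guidelines; a real system would encode the
# notified criteria for each neurological disorder.
REQUIRED_FINDINGS = {
    "motor_impairment_confirmed",
    "impairment_duration_over_6_months",
    "assessment_by_registered_practitioner",
}

# Verification keys of registered practitioners (illustrative only).
PRACTITIONER_KEYS = {"dr-042": b"demo-secret-key"}

@dataclass
class PatientRecord:
    patient_id: str
    findings: set = field(default_factory=set)
    certificate_issued: bool = False

def sign(practitioner_id: str, finding: str) -> str:
    """Practitioner-side: authenticate a finding before submitting it."""
    key = PRACTITIONER_KEYS[practitioner_id]
    return hmac.new(key, finding.encode(), hashlib.sha256).hexdigest()

def submit_finding(record: PatientRecord, practitioner_id: str,
                   finding: str, signature: str) -> None:
    """Ledger-side: accept only authenticated findings, then re-check eligibility."""
    key = PRACTITIONER_KEYS[practitioner_id]
    expected = hmac.new(key, finding.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("invalid signature: record rejected")
    record.findings.add(finding)
    # Smart-contract-style rule: issue the certificate automatically
    # once every required criterion appears in the patient's record.
    if not record.certificate_issued and REQUIRED_FINDINGS <= record.findings:
        record.certificate_issued = True
        print(f"disability certificate issued for {record.patient_id}")

patient = PatientRecord("px-001")
for finding in REQUIRED_FINDINGS:
    submit_finding(patient, "dr-042", finding, sign("dr-042", finding))
# -> prints once, after the final criterion is recorded
```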
Conclusion
In this article, we looked at the unique challenges of neurological disorders and disabilities. A blockchain model can engage healthcare providers, medical researchers, practitioners and medical institutions, and help in understanding the impact of contributing factors on a patient's health. The healthcare sector, in particular, would benefit from DLT because it makes 'real-time data' available, improving clinical care coordination and medical treatment in emergency situations and aiding researchers in detecting conditions which impact public health.[11] Implementing a blockchain model in the Indian healthcare system would not only be beneficial for the organization and interoperability of data; it would also be extremely beneficial for the study, organization and cure of the more complex diseases and disorders.

[1] Mead, C. N. (2006). Data Interchange Standards in Healthcare-Computable Semantic Interoperability: Now Possible but Still Difficult. Do We Really Need a Better Mousetrap? Journal of Healthcare Information Management, 20(1), 71. [2] Dr. Manjari Tripathi, Professor of Neurology, Department of Neurology, Neurosciences Centre, All India Institute of Medical Sciences, New Delhi. [3] Peterson, K., Deeduvanu, R., Kanjamala, P., & Boles, K. (2016). A Blockchain-based Approach to Health Information Exchange Networks. In Proc. NIST Workshop Blockchain Healthcare (Vol. 1, pp. 1-10). [4] Berger, A., Sadosky, A., Dukes, E., Edelsberg, J., & Oster, G. (2012). Clinical Characteristics and Patterns of Healthcare Utilization in Patients with Painful Neuropathic Disorders in UK General Practice: a Retrospective Cohort Study. BMC Neurology, 12(1), 8. [5] Ekblaw, A., Azaria, A., Halamka, J. D., & Lippman, A. (2016, August). A Case Study for Blockchain in Healthcare: "MedRec" Prototype for Electronic Health Records and Medical Research Data. In Proceedings of IEEE Open & Big Data Conference (Vol. 13, p. 13). [6] Linn, L. A., & Koo, M. B. (2016). Blockchain for Health Data and its Potential Use in Health IT and Health Care Related Research. In ONC/NIST Use of Blockchain for Healthcare and Research Workshop. Gaithersburg, Maryland, United States: ONC/NIST. [7] Engelhardt, M. A. (2017). Hitching Healthcare to the Chain: An Introduction to Blockchain Technology in the Healthcare Sector. Technology Innovation Management Review, 7(10). [8] Ivan, D. (2016). Moving Toward a Blockchain-based Method for the Secure Storage of Patient Records. In ONC/NIST Use of Blockchain for Healthcare and Research Workshop. Gaithersburg, Maryland, United States: ONC/NIST. [9] Vian, K., Voto, A., & Haynes-Sanstead, K. (2016). A Blockchain Profile for Medicaid Applicants and Recipients. Institute for the Future, August, 8, 1. [10] Ibid (n 5). [11] Ibid (n 6).