The Siskinds Privacy, Cyber and Data Governance team is focused on providing businesses and professionals with monthly updates on technology, privacy, and artificial intelligence (A.I.) laws in both the U.S. and Canada.
For August, we have many updates to share: from the Canadian Office of the Privacy Commissioner investigating Ticketmaster, to Google being found a monopolist, to the Lone Star State continuing its assault against Google and General Motors.
But first, here’s our word of the month: “red teaming”
Starting this month, we will also share a word that we come across in our studies that we find interesting. This month, we have chosen “red teaming,” a cybersecurity technique where a “red team,” working on behalf of an organization, pretends to be the organization’s adversary and attempts to access, physically or electronically, the organization’s facilities or systems. After the attack, the team drafts and provides an intrusion report to the organization so that it can improve its defences. The term originates from the Cold War, when the “red team” represented the Soviet Union and was used to improve NATO defence strategy and guard against false assumptions and groupthink. It contrasts with “blue teaming,” where the “blue team” comprises the organization’s own personnel who work to improve its defences.
General News: IBM’s data breach report; TikTok Lite; and more A.I.
MIT published an A.I. Risk Repository that provides a “comprehensive living database of over 700 A.I. risks categorized by their cause and risk domain.” This could be very helpful when used in conjunction with A.I. risk assessments.
July 2024: IBM released its Cost of a Data Breach Report 2024. Its key findings are that (1) the average total cost of a data breach rose from USD 4.45 million in 2023 to USD 4.88 million in 2024; (2) organizations that employed A.I. and automation across the security operations center (SOC) saw a 10% reduction in costs; (3) 35% of breaches involved data stored in unmanaged data sources, and due to the data’s unmanaged nature, those breaches took longer to identify and contain; (4) the cost of a malicious insider attack averaged USD 4.99 million; and (5) in ransomware cases where law enforcement was involved early, the cost of the breach was around USD 1 million lower.
July 20, 2024: TikTok released a “Lite” app designed to run better on lower bandwidth; it is unavailable in the US, Canada, and most of Europe. According to Mozilla research, TikTok Lite raises significant safety concerns. For example, Mozilla alleges that TikTok Lite users (1) “lack basic, proactive user controls at their disposal, including the ability to filter offensive content and unwanted keywords; and, lacks screen management tools that can mitigate app addiction”; and that the app (2) “provides no warning labels or banners on a range of potentially harmful content, from dangerous prank videos and graphic content to health and elections-related misinformation and AI-generated content.”
July 26, 2024: The US National Institute of Standards and Technology (NIST) released a public draft of one A.I. guidance document, Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), and finalized three others: A.I. Risk Management Framework: Generative A.I. Profile (NIST AI 600-1); Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A); and A Plan for Global Engagement on AI Standards (NIST AI 100-5).
August 2024: Around 272 million unique US Social Security Numbers (the Canadian equivalent is the Social Insurance Number) were leaked after a consumer data broker, NationalPublicData.com, was breached by hackers. Fortunately, according to Krebs on Security, most of the leaked SSNs are connected to people who are likely deceased.
August 14, 2024: According to Reuters, Google is launching new Pixel smartphones leveraging A.I. in competition with Apple. I have not dug deep into this development, but I am curious whether (1) Google is leveraging privacy-enhancing technology (PET) like Apple; and (2) Apple’s PET is effective in preserving privacy through its A.I. implementation.
August 20, 2024: The United Kingdom’s Information Commissioner’s Office released a privacy notice generator for small and medium-sized businesses to assist in privacy compliance.
Canada: Office of the Privacy Commissioner of Canada (OPC) launches investigation into Ticketmaster; Canadian border guards cannot search devices without reasonable suspicion of violations of the law
July 2024: Quebec issued its Governmental Cybersecurity and Digital Strategy (note: it is in French). The Strategy essentially discusses how the public administration is to improve the cybersecurity of its online offerings, digitally unify the public administration, develop secure infrastructures, and so on.
July 31, 2024: According to the Ottawa Citizen, the Canadian Forces monitor their personnel’s social media use for “unauthorized disclosures.”
July 31, 2024: Following Ticketmaster Canada’s disclosure of its cybersecurity incident, the OPC has launched an investigation into Ticketmaster to “examine the company’s practices with respect to security safeguards and whether the company complied with breach notification requirements.”
August 9, 2024: The Ontario Court of Appeal (the ONCA) held that Section 99(1)(a) of the Customs Act is unconstitutional (a violation of Section 8 of the Charter of Rights and Freedoms) because it authorized border officers to search travellers’ devices without any reasonable basis. The ONCA noted that “[a] reasonable search in this context requires a reasonable suspicion, meaning objective facts that establish a reasonable possibility that officers will find evidence of border law violations on the device.” R. v. Pike, 2024 ONCA 608.
August 26, 2024: Park’N Fly, an airport parking service, recently suffered a data breach exposing the personal information of around one million Canadians, including “the names, email and mailing addresses, and Aeroplan and CAA numbers.”
August 28, 2024: The OPC and the US Federal Communications Commission (FCC) signed a memorandum of understanding to “strengthen information sharing and enforcement cooperation between the two regulators.”
United States: “Google is a monopolist, and it has acted as one to maintain its monopoly”; Patreon settles Video Privacy Protection Act (VPPA) Class Action for $7.25 Million.
August 5, 2024: The US District Court for the District of Columbia has ruled that “(1) there are relevant product markets for general search services and general search text ads; (2) Google has monopoly power in those markets; (3) Google’s distribution agreements are exclusive and have anticompetitive effects; and (4) Google has not offered valid procompetitive justifications for those agreements. Importantly, the court also found that Google has exercised its monopoly power by charging supracompetitive prices for general search text ads. That conduct has allowed Google to earn monopoly profits.” United States of America v. Google LLC, 1:20-cv-03010-APM (D.D.C. August 5, 2024).
As a final note on this topic, Yelp also recently brought a complaint against Google alleging that it uses its search engine monopoly to promote its own reviews to keep consumers within Google’s “walled garden.”
August 5, 2024: According to MediaPost, Patreon agreed to settle claims that it violated the Video Privacy Protection Act when it disclosed users’ viewing choices to Meta via pixel tracking technology.
August 9, 2024: The Fifth Circuit held in United States v. Jamarr Smith, Case No. 23-60321 (5th Cir. 2024), that geofence warrants are “unconstitutional under the Fourth Amendment” (a ruling that applies to Louisiana, Mississippi, and Texas). With geofence warrants, “[l]aw enforcement simply specif[y] a location and period of time, and, after judicial approval, companies conduct sweeping searches of their location databases and provide a list of cell phones and affiliated users found at or near a specific area during a given timeframe, both defined by law enforcement.” This decision reaches a conclusion contrary to that of the Fourth Circuit in United States v. Chatrie.
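To make the mechanics concrete, here is a minimal, purely illustrative sketch of the kind of database query a geofence warrant compels. The record fields, the radius check, and the function names are hypothetical simplifications of our own; they are not how any particular provider actually implements these searches.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LocationRecord:
    device_id: str   # identifier tied to a user account (hypothetical field)
    lat: float
    lon: float
    timestamp: datetime

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def geofence_query(records, center_lat, center_lon, radius_m, start, end):
    """Return device IDs seen inside the fence during the window --
    i.e., the 'list of cell phones and affiliated users' the court describes."""
    return {
        r.device_id
        for r in records
        if start <= r.timestamp <= end
        and distance_m(r.lat, r.lon, center_lat, center_lon) <= radius_m
    }
```

The breadth concern flagged by the court follows directly from this shape of query: every device in the database is evaluated against the fence, regardless of any individualized suspicion about its owner.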
August 14, 2024: Switzerland certifies the US as offering adequate protection for personal data for “certified US companies”.
August 23, 2024: The US Department of Justice (DOJ), along with the AGs of eight states, filed an antitrust lawsuit against RealPage. The gist of the claim is that landlords share with RealPage confidential information about their apartment rental rates and other important lease terms to train RealPage’s algorithmic pricing software. “This software then generates recommendations, including on apartment rental pricing and other terms, for participating landlords based on their and their rivals’ competitively sensitive information.” See the Press Release. The consequence is that landlords can more easily coordinate to maintain or increase rents.
U.S. Federal Trade Commission (FTC) Enforcement; new rule regarding consumer reviews and testimonials.
August 14, 2024: The FTC published a new rule entitled “Trade Regulation Rule on the Use of Consumer Reviews and Testimonials,” 16 C.F.R. Part 465. Generally, the new rule states that it is an unfair or deceptive act or practice for a business to write, create, or sell a consumer review, consumer testimonial, or celebrity testimonial that materially misrepresents (1) that the reviewer or testimonialist exists; (2) that the reviewer or testimonialist used or otherwise had experience with the product, service, or business; or (3) the reviewer’s or testimonialist’s experience with the product, service, or business. Additionally, businesses cannot provide compensation for positive consumer reviews, and the officers or managers of a business cannot write consumer reviews or consumer testimonials about the business or its products or services without disclosing their relationship with the business.
There are many other components of the rule that are not discussed above. For a more detailed summary, please see the FTC’s summary guidance.
August 15, 2024: The FTC and the State of Arizona brought a complaint against Coulter Motor Company, LLC for deceptive advertising, hiked prices, unauthorized add-ons, and discriminatory practices, with a proposed settlement of $2.6 million. The defendants would advertise cars at low prices using the phrase the “Coulter Price” (often advertised as X below MSRP), luring consumers to the dealership. The FTC and the Arizona Attorney General allege that the defendants would then lock consumers into time-intensive negotiations and disclose “previously unmentioned and contrived ‘market adjustment [fees],’ purportedly for preinstalled add-ons, and miscellaneous fees,” which increase[d] the cost of the vehicle. Additionally, the FTC and the Arizona AG allege that the defendants charged consumers twice for certain add-ons: for example, “charging consumers $299 for VIN etching, in addition to a $696 charge for the Coulter Value Package, which includes VIN etching.” There are also allegations that the defendants charged Latino consumers more than non-Latino consumers. See the complaint here.
FCC Enforcement: $1 million fine for spoofed robocalls that used generative A.I. voice cloning technology.
August 21, 2024: The FCC settled for $1 million with Lingo Telecom, LLC, a voice service provider, over allegations that it transmitted spoofed robocalls that used generative A.I. voice cloning technology in contravention of the caller ID authentication rules under 47 C.F.R. § 64.6301(a).
August 30, 2024: The DOJ filed a complaint (upon referral from the FTC) against Verkada Inc., a surveillance camera company, for failing to provide reasonable security for the live camera feeds from hospitals, health clinics, elementary schools, and prison cells. This failure allegedly allowed a threat actor to “remotely access Verkada’s customer camera feeds and watch consumers live, without their knowledge or consent.”
Illinois: Amendment to Biometric Information Privacy Act (BIPA) signed into law; new A.I. provisions in employment law.
August 2, 2024: As noted in my June 2024 Privacy Pulse, Governor J.B. Pritzker signed into law the amendment that narrows BIPA to hold companies liable for only a single violation per person, rather than for each time biometric data is allegedly misused.
August 9, 2024: Illinois amended its Human Rights Act to include protections against discriminatory uses of A.I. The amendment makes it a civil rights violation, “[w]ith respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment, for an employer to use artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes . . .” The amendment additionally requires employers to provide notice that the employer “is using artificial intelligence” for one of the above-listed purposes. Interestingly enough, it does not contain the impact assessment requirements found in NYC’s Local Law 144 and Colorado’s A.I. Act.
New York, Connecticut, and New Jersey Enforcement Action:
August 13, 2024: The New York, Connecticut, and New Jersey Attorneys General (AGs) settled with Enzo Biochem, Inc. for “failing to adequately safeguard the personal and private health information of its patients.” The AGs found that “Enzo had poor data security practices, which led to a ransomware attack that compromised the personal and private information of approximately 2.4 million patients.”
Texas: Privacy Fights with Google and General Motors.
In last month’s Privacy Pulse, we discussed how Meta agreed to pay Texas $1.4 billion to settle claims that Meta violated Texas’ Capture or Use of Biometric Identifier Act (CUBI). As a note to our readers, Texas is also currently suing Google under CUBI (among other causes of action). Google recently brought a motion seeking discovery of certain information from certain parties. One argument of note that Google made was that CUBI largely “sat unenforced for its first twenty years on the books” and that Texas has “excused non-compliance by its preferred commercial vendors by adopting interpretations inconsistent with those it now advance[s] against Google.” See Texas’ recent response to Google’s motion.
August 13, 2024: In our March Privacy Pulse, we noted that General Motors (GM) and OnStar collected and shared your personal information, and that such sharing was, as reported by the New York Times, buried in OnStar’s and GM’s privacy notices. Then, in our June Privacy Pulse, we noted that the Texas AG had opened an investigation into GM.
Now, the Texas AG has brought a lawsuit against General Motors for violating the Texas Deceptive Trade Practices Act, alleging that the Defendants:
- “falsely misrepresented the benefits and risks of its products and their related features . . . While touting the benefits of its products . . . [the] Defendants were silent as to the risks associated with their information sharing practices. Moreover, [the] Defendants repeatedly sold their data in a manner it knew could financially harm consumers through higher car insurance premiums, being dropped from coverage, or being denied coverage.”
- “self-servingly used the vast amount of data it collected about its customers to derive a profit by repeatedly selling its customers’ information to several different companies over the course of nearly a decade.”
- “entered into several unrelated agreements explicitly to sell customers’ information, none of which involved marketing activities. Defendants never disclosed to customers that their information would be sold for other purposes.”
- “misrepresented the purpose of the collection of data as being for the customer’s benefit, not other companies such as Insurers.”
- “used several false, misleading, and deceptive techniques to obtain customers’ ‘consent’ to Defendants’ collection and sale of their data, including through its utilization of an aggressive onboarding program that included misrepresenting to customers that its dealership onboarding process was a pre-requisite to taking ownership of their vehicles.”
- “purported to provide consumers with disclosures of their privacy practices, but utilized lengthy and confusing privacy statements that obfuscated Defendants’ practices.”
- “falsely represented that customers would be able to exercise control over the sharing of their data with insurance providers when such was not the case.”
Interestingly enough, the Texas AG did not use the newly enforceable Texas Data Privacy and Security Act, instead opting for the Deceptive Trade Practices Act. Regardless, this reaffirms that the underpinning of US consumer privacy law rests on agencies like the FTC, and their state-based equivalents, using their authority to counteract unfair and/or deceptive practices.
California: Age-Appropriate Design Code (AADC).
August 16, 2024: As many of you are aware, California recently passed the AADC (Cal. Civ. Code §§ 1798.99.28–1798.99.40) to “protect the kids” on the internet. It was largely modeled on the UK’s equivalent law. The AADC was intended to ensure that online products, services, and features that are “likely” to be accessed by children are built in a manner that recognizes the needs of children. The AADC defines the contours of what it means to be “likely to be accessed by children”; a few of those contours are as follows: the online product, service, or feature is “routinely accessed by a significant number of children”; the online product, service, or feature is “directed to children”; or the online product, service, or feature has design elements known to be of interest to children.
Assuming your business’ online product, service, or feature is caught within the AADC’s crosshairs, there are several notable obligations, which include: (1) completing a Data Protection Impact Assessment for any new online product, service, or feature before it becomes public (which must be provided to the California Attorney General upon request); and (2) performing age estimation on users to have a reasonable level of certainty as to who is a child versus an adult (or otherwise providing children’s privacy protections to everyone).
As a side note, the second obligation raises significant privacy concerns. To have a reasonable level of certainty that someone is the age they claim to be, a business would have to somehow verify that age. The easiest method that comes to mind would be to take the person’s driver’s license and a current photo of the person’s face, and use biometric processing to match the face in the photo against the face on the driver’s license. The issue with this method is that the business must collect and then use biometric information. Many jurisdictions have standalone laws for biometric information, and many others classify biometric information as sensitive personal information, both of which impose additional stringent requirements on businesses. Accordingly, businesses would be faced with a choice: (1) provide the level of privacy rights we expect for children to everyone, or (2) collect sensitive personal information to verify the person’s age (and abide by the stringent requirements that accompany the collection of such information).
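For illustration only, the face-matching approach described above could look something like the sketch below, which uses the open-source face_recognition Python library. The function name, file paths, and tolerance value are our own illustrative assumptions, not a recommended or production-ready verification flow.

```python
# pip install face_recognition  (wraps dlib's face-embedding model)
import face_recognition

def faces_match(id_photo_path: str, selfie_path: str, tolerance: float = 0.6) -> bool:
    """Compare the face on an ID document with a current photo.
    Returns True if the two face embeddings fall within `tolerance`."""
    id_image = face_recognition.load_image_file(id_photo_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    id_encodings = face_recognition.face_encodings(id_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not id_encodings or not selfie_encodings:
        return False  # no face detected in one of the images

    # compare_faces returns one boolean per known encoding supplied
    return bool(face_recognition.compare_faces(
        [id_encodings[0]], selfie_encodings[0], tolerance=tolerance
    )[0])

# Hypothetical usage: both images become biometric information once processed,
# which is precisely what triggers the stringent requirements discussed above.
# verified = faces_match("drivers_license.jpg", "current_photo.jpg")
```

The point of the sketch is not the matching itself but the compliance consequence: the moment the business computes those face embeddings, it is collecting and using biometric (and, in many jurisdictions, sensitive) personal information.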
Anyways, in NetChoice, LLC v. Bonta, 2024 WL 3838423 (9th Cir. Aug. 16, 2024), the Ninth Circuit held that the AADC’s Data Protection Impact Assessment requirement violated the First Amendment because it required businesses to “opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online,” and because it “deputizes covered businesses into serving as censors for the State and . . . determining whether material is suitable for kids.” The Ninth Circuit remanded to the District Court to determine whether the unconstitutional provisions are severable from the AADC (i.e., whether the court can keep the remainder of the AADC alive and constitutional).
Putting privacy first: Your path to compliance starts here
It is crucial for businesses to establish a comprehensive privacy program. At Siskinds, our Privacy Concierge program offers customized subscription options.
To discover how Siskinds can assist you in meeting your privacy compliance needs, or if you have any questions related to this blog post, contact me, Savvas Daginis, at [email protected], or a lawyer on Siskinds’ Privacy, Cyber & Data Governance team.