The Siskinds Privacy, Cyber and Data Governance team is focused on providing businesses and professionals with monthly updates on technology, privacy, and artificial intelligence (A.I.) laws in both the U.S. and Canada.
For September, we have many updates to share: from hacking cars, to Quebec’s data portability right coming into force, to the FTC’s “Operation A.I. Comply”.
Word of the Month: “Shadow Data”
“Shadow Data” refers to data created or stored in systems that are not managed by an organization’s IT department and are consequently unknown to the IT team. Examples include data stored on an employee’s personal phone, on unmanaged local storage devices such as USB drives, or in unapproved cloud storage services. In the event of a data breach affecting Shadow Data, an organization may not know that data was affected or, even if it knows a breach occurred, which data was affected.
General News: London, Ontario ranked #4 in emerging tech markets in North America
According to CBRE, as reported by TechAlliance, London, Ontario has been ranked #4 in emerging tech markets in North America.
September 23, 2024: Telegram announced it will take a more proactive approach to complying with government requests for access to information.
September 26, 2024: Security researchers found “a flaw in a Kia web portal that let them track millions of cars, unlock doors, and start engines.” Kia has since fixed the problem; however, this offers a lesson: as products feature more internet connectivity, they expose more access points that can be vulnerable to malicious actors.
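To make the lesson concrete, here is a minimal, hypothetical TypeScript sketch (not Kia’s actual code; the VIN, user IDs, and functions are invented for illustration) of the kind of server-side authorization check whose absence enables this class of attack: a remote-command endpoint must bind the requested vehicle to the authenticated caller, rather than trusting a client-supplied identifier alone.

```typescript
// Hypothetical sketch only, using an in-memory ownership lookup in place
// of a real database. It illustrates the general flaw class described in
// the Kia research, not Kia's actual implementation.

type Command = "UNLOCK_DOORS" | "START_ENGINE";

// Stand-in for a real ownership database (VIN -> owning user ID).
const ownerByVin = new Map<string, string>([
  ["1HGCM82633A123456", "user-42"], // invented VIN and user ID
]);

function handleRemoteCommand(
  callerUserId: string,
  vin: string,
  command: Command,
): string {
  // The vulnerable pattern: acting on the VIN alone lets any caller
  // control any car. The fix is this explicit authorization check,
  // which binds the target vehicle to the authenticated caller.
  if (ownerByVin.get(vin) !== callerUserId) {
    return "403 Forbidden: caller is not authorized for this vehicle";
  }
  // Only now would the command be dispatched to the vehicle (stubbed here).
  return `200 OK: ${command} sent to ${vin}`;
}

// An attacker who merely knows a VIN is rejected; the owner is not.
console.log(handleRemoteCommand("attacker-99", "1HGCM82633A123456", "UNLOCK_DOORS"));
console.log(handleRemoteCommand("user-42", "1HGCM82633A123456", "UNLOCK_DOORS"));
```

The broader design point: every internet-facing feature added to a product is another endpoint where a check like this can be missing.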
Canada: Quebec’s Data Portability Provisions are now in force
September 9, 2024: The Federal Court of Appeal held that Facebook breached PIPEDA through its practice of sharing Facebook users’ personal information with third-party apps without meaningful consent, and that it breached its safeguarding obligations by failing to adequately monitor and enforce the privacy practices of third-party apps operating on its platform.
September 22, 2024: Quebec Law 25’s data portability provisions are now in force, which means individuals may now request access to their personal data and have it transferred to another legally authorized organization of their choice. This also applies to employers.
September 27, 2024: The Commission d’accès à l’information du Québec published a brief on its concerns about the privacy of young people in the digital environment. See the brief (in French) here.
United States: Investigation into Nvidia’s antitrust practices
August 29, 2024: The Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), the Multi-State Information Sharing and Analysis Center (MS-ISAC), and the Department of Health and Human Services (HHS) released a joint advisory disseminating known RansomHub ransomware indicators of compromise and historically observed tactics, techniques, and procedures. See the RansomHub advisory here.
September 3, 2024: Reuters has reported that the U.S. Department of Justice subpoenaed Nvidia as part of an investigation into the company’s antitrust practices.
U.S. Federal Trade Commission Enforcement
September 19, 2024: The FTC released a staff report with recommendations for policymakers and companies, including the following:
- “Congress should pass comprehensive federal privacy legislation to limit surveillance, address baseline protections, and grant consumers data rights;
- Companies should limit data collection, implement concrete and enforceable data minimization and retention policies, limit data sharing with third parties and affiliates, delete consumer data when it is no longer needed, and adopt consumer-friendly privacy policies that are clear, simple, and easily understood;
- Companies should not collect sensitive information through privacy-invasive ad tracking technologies;
- Companies should carefully examine their policies and practices regarding ad targeting based on sensitive categories;
- Companies should address the lack of user control over how their data is used by systems as well as the lack of transparency regarding how such systems are used, and also should implement more stringent testing and monitoring standards for such systems;
- Companies should not ignore the reality that there are child users on their platforms and should treat COPPA as representing the minimum requirements and provide additional safety measures for children;
- Companies should recognize teens are not adults and provide them greater privacy protections; and
- Congress should pass federal privacy legislation to fill the gap in privacy protections provided by COPPA for teens over the age of 13.”
September 25, 2024: The FTC announced “Operation A.I. Comply” and has taken action against five companies for deceptive and/or unfair practices involving A.I.
First is an action against “DoNotPay”, a service that promised to be “the world’s first robot lawyer”. The business described its A.I. as “capable of performing legal services such as drafting ‘ironclad’ demand letters, contracts, complaints for small claims court, challenging speeding tickets, and appealing parking tickets.” The FTC investigated and alleged that:
“DoNotPay did not test whether the Service’s law-related features operated like a human lawyer. DoNotPay has developed the Service based on technologies that included a natural language processing model for recognizing statistical relationships between words, chatbot software for conversing with users, and an Application Programming Interface (“API”) with OpenAI’s ChatGPT. None of the Service’s technologies has been trained on a comprehensive and current corpus of federal and state laws, regulations, and judicial decisions or on the application of those laws to fact patterns. DoNotPay employees have not tested the quality and accuracy of the legal documents and advice generated by most of the Service’s law-related features. DoNotPay has not employed attorneys and has not retained attorneys, let alone attorneys with the relevant legal expertise, to test the quality and accuracy of the Service’s law-related features.”
See the Complaint for more claims and counts.
Second is an action against Ascend Ecom, which falsely claimed that its “‘cutting edge’ A.I.-powered tools would help consumers quickly earn thousands of dollars a month in passive income by opening online storefronts. According to the complaint, the scheme has defrauded consumers of at least $25 million.”
The third is an action against Ecommerce Empire Builders, which falsely claimed that “consumers will generate substantial income from online stores that are ‘powered by artificial intelligence’ and Defendants’ ‘proven’ strategies. In truth, the promised profits never materialize[d] . . . Defendants’ income representations about their business opportunities and self-study programs [were] false or unsubstantiated.” The Defendants also used “non-disparagement clauses in their form contracts that prohibit[ed] virtually all communications or statements about the Defendants, which violate the Consumer Review Fairness Act.”
The fourth is an action against Rytr, which marketed and sold an A.I. writing assistance service, one use case of which was testimonial generation. According to the FTC, “Rytr’s service generated detailed reviews that contained specific, often material details that had no relation to the user’s input, and these reviews almost certainly would be false for the users who copied them and published them online . . . The complaint charges Rytr with violating the FTC Act by providing subscribers with the means to generate false and deceptive written content for consumer reviews. The complaint also alleges that Rytr engaged in an unfair business practice by offering a service that is likely to pollute the marketplace with a glut of fake reviews that would harm both consumers and honest competitors.”
The last is an action against FBA Machine, whose alleged conduct is similar to that of Ascend Ecom: claims that its A.I. would quickly earn consumers substantial passive income from sales in online stores on e-commerce platforms.
U.S. Federal Communications Commission Enforcement: Settlements with AT&T and T-Mobile
September 17, 2024: The FCC announced a $13 million settlement with AT&T to resolve an investigation into the “company’s supply chain integrity and whether it failed to protect the information of AT&T customers in connection with a data breach of [one of its vendors’] cloud environment.” Specifically, AT&T’s vendor generated and hosted personalized video content for AT&T users. The contract between AT&T and the vendor required the vendor to destroy or return the information once it was no longer necessary to fulfill contractual obligations, which ended years before the breach. Accordingly, “AT&T failed to ensure the vendor: (1) adequately protected the customer information, and (2) returned or destroyed it as required by contract.”
The lesson for businesses: even if your data processing agreement covers you, you still need to take steps to enforce it, because ultimately you will be held responsible when things hit the fan.
September 27, 2024: The FCC also announced a $15 million settlement with T-Mobile to resolve an investigation into T-Mobile’s alleged failure to protect the confidentiality of customers’ personal information; impermissible use or disclosure of, or permitting access to, individually identifiable customer proprietary network information (CPNI) without customer approval; and failure to take reasonable measures to discover and protect against attempts to gain unauthorized access to CPNI, all in violation of a carrier’s statutory duty under the Communications Act and the Commission’s CPNI rules.
California: Advisory on dark patterns, and new A.I. laws
September 4, 2024: The California Privacy Protection Agency (CPPA) issued an Enforcement Advisory on “dark patterns”, which generally refer to user interfaces that “subvert or impair consumers’ autonomy, decision making, or choice when asserting their privacy rights or consenting. For example, when businesses provide choices to consumers, such as the option to opt-out of the sale or sharing of their personal information, businesses must present these choices in a clear and balanced way. If the choices are unclear, they might be considered dark patterns.” See also Civil Code § 1798.140(l); 11 CCR §§ 7003(a), 7004(a)(1), 7004(a)(2).
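To illustrate the “clear and balanced” idea in practical terms, here is a small, hypothetical TypeScript sketch (not from the CPPA advisory; the types and the check are invented for illustration) that models a consent prompt as data and tests whether the choices are presented symmetrically:

```typescript
// Hypothetical illustration of "symmetry in choice": the privacy-
// protective option should take no more effort, and be no less prominent,
// than the alternative. This is not an official CPPA test.

type Choice = {
  label: string;
  clicksRequired: number; // steps needed to select this option
  visuallyEmphasized: boolean; // e.g., highlighted button styling
};

// Likely a dark pattern: opting out is buried behind extra steps.
const unbalanced: Choice[] = [
  { label: "Accept all", clicksRequired: 1, visuallyEmphasized: true },
  { label: "Manage settings to opt out", clicksRequired: 3, visuallyEmphasized: false },
];

// A balanced presentation: both paths take equal effort and prominence.
const balanced: Choice[] = [
  { label: "Accept all", clicksRequired: 1, visuallyEmphasized: false },
  { label: "Reject all", clicksRequired: 1, visuallyEmphasized: false },
];

// A simple symmetry check a design or compliance review might apply.
function isSymmetric(choices: Choice[]): boolean {
  const clickCounts = new Set(choices.map((c) => c.clicksRequired));
  const emphasis = new Set(choices.map((c) => c.visuallyEmphasized));
  return clickCounts.size === 1 && emphasis.size === 1;
}

console.log(isSymmetric(unbalanced)); // false — asymmetric choices risk being a dark pattern
console.log(isSymmetric(balanced)); // true — equal effort and prominence
```

Equal effort is only one facet of the symmetry principle in the cited regulations; a real review would also consider wording, layout, and the overall flow a consumer must navigate.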
September 19, 2024: California passed three new A.I. laws: SB 942, which requires generative A.I. systems to include “provenance disclosures” (i.e., tags to allow others to identify A.I.-generated content) in the content they generate; SB 926, which creates a crime “targeting A.I.-generated sexually explicit deepfake content”; and SB 981, which requires “social media platforms to establish a mechanism for users to report sexually explicit deepfakes of themselves.”
Utah: Court enjoins the “Minor Protection in Social Media Act”
September 10, 2024: As many of you are aware, Utah is one of several states that enacted a social media law seeking to “protect the kids” by requiring social media platforms to verify users’ ages and impose special restrictions on minors’ accounts. In March 2024, Utah enacted the Utah Minor Protection in Social Media Act, which was to take effect on October 1, 2024. Specifically, the Act requires social media companies to (1) “implement an age assurance system to determine whether a current or prospective Utah account holder . . . is a minor,” with an accuracy rate of at least 95%; and (2) once minors are identified, enable strict privacy settings for their accounts unless the company obtains verifiable parental consent, including disabling certain “features that prolong user engagement” (e.g., autoplay functions, scrolling that loads additional content, and push notifications) and treating the minor’s personal information as presumptively confidential. The court enjoined the Act on First Amendment grounds. See NetChoice LLC v. Reyes, No. 2:23-cv-00911-RJS-CMR (D. Utah Sept. 10, 2024).
Importantly, I don’t think age assurance on the internet is a good idea, because it would force platforms to verify a user’s age, which likely entails the collection and use of biometric information (which is sensitive personal information). Among the reasons biometric information is sensitive is that it is difficult for a user to change (e.g., imagine changing your facial geometry). If platforms are forced to collect biometric information, then they must take reasonable safeguards to protect it. However, no matter the safeguards a platform may take, security incidents are inevitable, and thus it is only a matter of time before our biometrics are leaked in a security incident, leaving consumers unnecessarily exposed (changing your social security / insurance number is one thing; changing your face, fingerprint, iris, etc. is another).
New York, Connecticut, and New Jersey: Enforcement action against Enzo Biochem
August 13, 2024: The New York, Connecticut, and New Jersey Attorneys General (AGs) settled with Enzo Biochem, Inc. for “failing to adequately safeguard the personal and private health information of its patients.” The AGs found that “Enzo had poor data security practices, which led to a ransomware attack that compromised the personal and private information of approximately 2.4 million patients.”
Texas: More from the Lone Star State
September 18, 2024: The Texas AG settled with Pieces Technologies, an artificial intelligence healthcare technology company, resolving allegations that it made a “series of false and misleading statements about the accuracy and safety of its products.” The company claimed that its A.I. was “‘highly accurate,’ including advertising and marketing the accuracy of its products and services by claiming an error rate or ‘severe hallucination rate’ of ‘<1 per 100,000.’” The Texas AG’s investigation found that this was likely false, and the company agreed to disclose the true extent of its products’ accuracy. What’s the lesson here? Don’t make claims unsupported by the evidence.
October 3, 2024: The Texas AG sued TikTok for sharing the personal information of minors. See the press release here.
Putting privacy first: Your path to compliance starts here
It is crucial for businesses to establish a comprehensive privacy program. At Siskinds, our Privacy Concierge program offers customized subscription plans.
To discover how Siskinds can assist you in meeting your privacy compliance needs, or if you have any questions related to this blog post, contact me, Savvas Daginis, at [email protected], or a lawyer on Siskinds’ Privacy, Cyber & Data Governance Team.