You are viewing an archived webpage. The information on this page may be out of date. Learn about EPIC's recent work at epic.org.

United States v. Wilson

Whether the Fourth Amendment permits constant scanning of images uploaded to Google with corresponding reports automatically sent to law enforcement, absent evidence establishing that the underlying algorithm is accurate and reliably detects only images previously viewed by Google employees
  • Ninth Circuit Says Warrantless Search of Google Files Automatically Reported to Police Violated Fourth Amendment: The Ninth Circuit announced today that police violated a defendant's Fourth Amendment rights when they warrantlessly searched files that Google automatically reported using a proprietary algorithm designed to detect child sexual abuse material ("CSAM"). Prosecutors in the case, United States v. Wilson, had argued that the police officer's search of the defendant's files did not violate the Fourth Amendment because Google, a private party, had conducted the initial search. The district court agreed, finding a "virtual certainty" that the files Google sent to police were identical to files previously identified by a Google employee as CSAM. But no Google employee reviewed the defendant's files before they were sent to police; instead, Google automatically forwarded the files to law enforcement after a proprietary algorithm matched them to previously identified CSAM images. EPIC filed an amicus brief in the Ninth Circuit appeal explaining that prosecutors had failed to show that the proprietary Google algorithm reliably matched images, and urged the court to apply the private search exception narrowly. The Ninth Circuit found that the police search "allowed the government to learn new, critical information" and "expanded the scope of the antecedent private search because the government agent viewed Wilson's email attachments even though no Google employee—or other person—had done so." The Ninth Circuit also echoed EPIC's amicus brief: "on the limited evidentiary record, the government has not established that what a Google employee previously viewed were exact duplicates of Wilson's images." The decision diverges from prior federal appellate and state court decisions on the issue and may lead the Supreme Court to take up the important privacy implications of mass automatic file scanning programs. (Sep. 21, 2021)
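    The technical distinction at the heart of the case can be illustrated in a few lines of code: a cryptographic file hash matches only exact duplicates, while similarity-based ("perceptual") image matching tolerates small changes and can therefore flag images no human has ever reviewed. The sketch below is illustrative only; the toy byte strings and the average-hash scheme are assumptions for demonstration, not Google's proprietary algorithm.

    ```python
    import hashlib

    def file_hash(data: bytes) -> str:
        """Cryptographic hash: any single-bit change yields a different
        digest, so a match means the files are exact duplicates."""
        return hashlib.sha256(data).hexdigest()

    def average_hash(pixels: list) -> int:
        """Toy perceptual hash over grayscale pixel values (0-255): each bit
        records whether a pixel is above the mean. Visually similar images
        yield similar bit patterns -- but so can some dissimilar ones."""
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    original = bytes([10, 200, 30, 180, 50, 160, 70, 140])  # stand-in "image"
    altered = bytes([12, 198, 32, 178, 52, 158, 72, 138])   # slightly edited copy

    # Exact hashing: the edit breaks the match entirely.
    print(file_hash(original) == file_hash(altered))  # False

    # Similarity matching: the edit survives a distance threshold, which is
    # exactly what makes this approach prone to false positives.
    distance = hamming(average_hash(list(original)), average_hash(list(altered)))
    print(distance <= 2)  # True
    ```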
  • Senator Markey, Rep. Matsui Introduce Bill to Increase Transparency and Decrease Discrimination in Algorithms » (May. 27, 2021)
    Senator Ed Markey (MA) and Representative Doris Matsui (CA) introduced the Algorithmic Justice and Online Transparency Act of 2021 today. The bill prohibits online platforms from using algorithmic processes that discriminate on the basis of protected classes, requires online platform companies to create and maintain documentation about their algorithms for review by the FTC, and establishes a standard for safe and effective algorithmic processes. The bill also calls for the creation of an inter-agency task force (including the Federal Trade Commission, the Department of Housing and Urban Development, the Department of Education, the Department of Justice, and the Department of Commerce) to investigate discriminatory algorithmic processes. EPIC endorses the bill and has long advocated for Algorithmic Transparency and Equity, urging state, federal, and international governments to regulate harmful AI guided by the Universal Guidelines for AI. Last year, EPIC petitioned the FTC to conduct a rulemaking regulating the use of algorithmic tools in order to address discrimination.
  • FTC Settlement Over Tenant Screening Algorithm Lacks Safeguards, Redress for Victims » (Dec. 8, 2020)
    The Federal Trade Commission has reached a settlement with AppFolio that requires the company to fix its faulty and unlawful tenant screening algorithm but fails to compensate victims and lacks adequate safeguards to ensure AppFolio's compliance. AppFolio included inaccurate information in tenant background reports in violation of the Fair Credit Reporting Act, which "directly resulted in qualified tenants being turned away from potential homes." The settlement requires AppFolio to pay a $4.25 million fine, comply with the FCRA going forward, and submit regular compliance paperwork to the FTC. But Commissioner Rohit Chopra dissented, arguing that the Commission should provide victims redress, impose stronger accountability measures, and refer the case to the Justice Department over possible housing discrimination. "Sloppy, inaccurate credit reporting practices are not mild inconveniences for American families," Chopra wrote. "They can be deeply harmful, reinforcing discrimination and foreclosing opportunities for individuals to seek a better home, job, and life." In February 2020, EPIC filed a complaint against Airbnb asking the FTC to investigate whether the company's customer screening algorithm violates the Fair Credit Reporting Act.
  • California Voters Reject Proposition to Mandate Pretrial Risk Assessment Use » (Nov. 10, 2020)
    Proposition 25, which would have abolished cash bail in California but replaced it with the mandatory use of controversial pretrial risk assessment tools, was rejected by 56% of the state's voters earlier this month. Pretrial risk assessments attempt to predict the likelihood that a person will fail to appear at trial or be arrested again. Research has shown that these tools reflect and encode biases based on race, age, ethnicity, and socioeconomic status. Pretrial risk assessments are already widely used throughout the country, including in parts of California, but Proposition 25 would have mandated their use statewide. EPIC recently published Liberty At Risk, a report on pretrial risk assessment tools, and maintains a resource on algorithms in the criminal justice system.
  • Court Blocks Rule That Would Okay Algorithmic Housing Decisions, Limit Discrimination Claims » (Oct. 29, 2020)
    A federal judge in Massachusetts has blocked a federal regulation that would have made it significantly harder to sue landlords and lenders for housing discrimination under the Fair Housing Act. The rule created a defense to any disparate impact claim in which a "predictive analysis" tool was used to make a housing decision, so long as that tool "accurately assessed risk" or was not "overly restrictive on a protected class." The court ruled that this regulation would "run the risk of effectively neutering disparate impact liability under the Fair Housing Act." In 2019, EPIC and others warned the federal housing agency that sanctioning the use of algorithms for housing decisions would exacerbate discrimination unless the agency imposed transparency, accountability, and data protection requirements. The Alliance for Housing Justice called the rule "a vague, ambiguous exemption for predictive models that appears to confuse the concepts of disparate impact and intentional discrimination." EPIC has called for greater accountability in the use of automated decision-making systems, including the adoption of the Universal Guidelines for Artificial Intelligence and requirements for algorithmic transparency.
  • Amazon Claims 'Halo' Device Will Monitor User's Voice for 'Emotional Well-Being' » (Sep. 1, 2020)
    Despite the exceptional privacy risks of biometric data collection and opaque, unproven algorithms, Amazon last week unveiled Halo, a wearable device that purports to measure "tone" and "emotional well-being" based on a user's voice. According to Amazon, the device "uses machine learning to analyze energy and positivity in a customer's voice so they can better understand how they may sound to others[.]" The device also monitors physical activity, assigns a sleep score, and can scan a user's body to estimate body fat percentage and weight. In recent years, Amazon has come under fire for its development of biased and inaccurate facial surveillance tools, its marketing of the home surveillance camera Ring, and its controversial partnerships with law enforcement agencies. Last year, EPIC filed a Federal Trade Commission complaint against HireVue, whose AI hiring tool claims to evaluate "cognitive ability," "psychological traits," and "emotional intelligence" based on videos of job candidates. EPIC has long advocated for algorithmic transparency and the adoption of the Universal Guidelines for AI.
  • EPIC Releases Report on Pretrial Risk Assessments » (Jul. 22, 2020)
    EPIC has released a report on pretrial risk assessments. The report, Liberty at Risk: Pre-trial Risk Assessment Tools in the U.S., provides an overview that practitioners and scholars can use to understand the nature of these systems and the broader context in which they are used, and to focus their evaluations of the systems' fairness. EPIC hosted a panel on the topic on July 8, available to watch here. EPIC advocates for Algorithmic Transparency and maintains a resource on Algorithms in the Criminal Justice System.
  • Federal Appeals Court Sounds Alarm Over Predictive Policing » (Jul. 16, 2020)
    Judges on a federal appeals court took aim yesterday at predictive policing, the practice of using algorithmic analysis to predict crime and direct law enforcement resources. The Fourth Circuit ruled that Richmond police violated the Fourth Amendment when they stopped and searched the defendant, Billy Curry, simply because he was walking near the scene of a shooting. In a dissent, Judge J. Harvie Wilkinson called the court's decision a "gut-punch to predictive policing." But others on the court responded by highlighting the dangers and failings of the practice. Chief Judge Roger Gregory questioned whether predictive policing is "a high-tech version of racial profiling." Judge James A. Wynn highlighted the "devastating effects of over-policing on minority communities" and explained that predictive policing "results in the citizens of those communities being accorded fewer constitutional protections than citizens of other communities." Judge Stephanie D. Thacker warned that "any computer program or algorithm is only as good as the data that goes into it" and that predictive policing "has been shown to be, at best, of questionable effectiveness, and at worst, deeply flawed and infused with racial bias." EPIC has long highlighted the risks of algorithms in the criminal justice system and recently obtained a 2014 Justice Department report detailing the dangers of predictive policing.
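    Judge Thacker's warning that an algorithm "is only as good as the data that goes into it" can be made concrete with a toy simulation: if two neighborhoods have identical underlying offense rates but one is patrolled twice as heavily, arrest records will make its residents appear twice as risky to any tool trained on those records. All numbers below are invented for illustration.

    ```python
    import random

    random.seed(0)

    TRUE_OFFENSE_RATE = 0.05                 # identical in both neighborhoods
    PATROL_INTENSITY = {"A": 1.0, "B": 2.0}  # neighborhood B is over-policed

    def observed_arrest_rate(neighborhood: str, n: int = 100_000) -> float:
        """An offense only enters the data if police are there to record it,
        so the observed rate scales with patrol intensity, not behavior."""
        chance = TRUE_OFFENSE_RATE * PATROL_INTENSITY[neighborhood]
        arrests = sum(1 for _ in range(n) if random.random() < chance)
        return arrests / n

    print(observed_arrest_rate("A"))  # ~0.05
    print(observed_arrest_rate("B"))  # ~0.10: same behavior, double the "risk"
    ```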
  • New York City Passes New Surveillance Transparency Law » (Jun. 19, 2020)
    Yesterday the New York City Council passed the Public Oversight of Surveillance Technology (POST) Act, a law that enables public oversight of surveillance technologies used by the New York Police Department. The POST Act will require the police to publish documents explaining their use of surveillance technologies, accept public comments about them, and provide a final surveillance impact and use policy to the public. EPIC has worked for years to focus public attention on the privacy impact of emerging surveillance technologies, and has pursued open government cases against the FBI and other law enforcement agencies to release information about cell site simulators and other surveillance technologies. EPIC has recently launched a project to track and review algorithms used in the criminal justice system.
  • EPIC Celebrates Sunshine Week with 2020 FOIA Gallery » (Mar. 16, 2020)
    In celebration of Sunshine Week, EPIC has unveiled the 2020 FOIA Gallery. Since 2001, EPIC has annually published highlights of EPIC's most significant open government cases. For example, last year EPIC filed the first lawsuit in the country for the public release of the Mueller Report. The federal court rebuked Attorney General Barr and agreed to review the complete Mueller Report to determine what additional material must be released. EPIC also prevailed in EPIC v. AI Commission, in which a federal court ruled that the National Security Commission on Artificial Intelligence is subject to the FOIA. Following the court's decision, the AI Commission released documents about its activities to EPIC. In this year's FOIA gallery, EPIC also highlighted pre-trial risk assessment reports, documents about Justice Kavanaugh's role in the warrantless surveillance program, a DHS drone status report, the Census data transfer plan, and more than 29,000 complaints against Facebook pending at the FTC.
  • In FOIA Case, EPIC Obtains New Documents From AI Commission » (Mar. 4, 2020)
    EPIC has obtained more documents from the National Security Commission on Artificial Intelligence. The records obtained by EPIC show that the AI Commission was aware of work on algorithmic transparency and AI bias. But the Commission's recent report to Congress did not endorse these recommendations, instead criticizing EU privacy law and calling for greater "government access to data on Americans." The Commission's disclosure follows a court ruling in EPIC v. AI Commission that the Commission is subject to the FOIA. Before issuing its report, the AI Commission held regular secret meetings with tech firms and defense contractors but did not gather opinions from the American public. EPIC is also litigating to enforce the Commission's obligation to hold open meetings.
  • Poll: Americans Oppose Micro-Targeting in Online Political Ads » (Mar. 2, 2020)
    A new poll from Gallup and the Knight Foundation found that the majority of Americans do not want political campaigns to micro-target digital ads. Democrats (69%), independents (72%), and Republicans (75%) said that internet companies should not provide information about users to political campaigns for online advertisements. Fifty-nine percent said internet companies should disclose who paid for political ads, how much the ads cost, and to whom they are targeted. EPIC Consumer Protection Counsel Christine Bannan testified at an FEC hearing in 2018 and urged the Commission to promulgate rules mandating disclosure of the source of online political ads, comparable to the rules for print and broadcast publications.
  • EU Hearing on AI in Criminal Justice Highlights Concerns » (Feb. 20, 2020)
    The European Parliament heard testimony today on AI in criminal law amidst a widespread push toward robust AI regulation in the EU. The panelists before the committee responsible for civil liberties, justice, and home affairs focused on facial recognition, risk assessments, and predictive policing. The hearing explored regulation, law enforcement use, and questions of transparency, explainability, and accountability. The hearing in Parliament followed the release of a European Commission White Paper on AI. EPIC has called for a moratorium on face surveillance and maintains a resource about the use of risk assessments in the US criminal justice system.
  • Dutch Court Rules Secret Welfare Algorithm Violates Human Rights » (Feb. 5, 2020)
    A Dutch court ruled that an algorithmic risk assessment technique that ostensibly detects fraud violates human rights and privacy laws. The SyRI system used an opaque algorithm to process massive amounts of personal data held by government agencies. The Dutch court ruled that "there is a risk that the use of SyRI will inadvertently make connections based on bias." EPIC tracks and publicizes the use of risk assessments in the US criminal justice system and advocates for the Universal Guidelines for AI to ensure Algorithmic Transparency in automated decision making. EPIC also published the AI Policy Sourcebook, the first reference book on AI policy.
  • Documents Obtained by EPIC show Idaho's Use of Subjective Categories in Calculating Risk » (Dec. 11, 2019)
    In response to EPIC's public records request, the Idaho Department of Correction released several documents about its risk assessment instrument, the "Level of Service Inventory-Revised" (LSI-R). An annotated scoresheet that informs the LSI-R's calculation reveals that the department uses several subjective categories to score an offender's risk of recidivism, including information about the alleged criminality of a defendant's social network, participation in leisure activities, and mental health. EPIC also obtained a detailed scoring guide, LSI-R training materials, validation studies, and contract details. Only two validation studies were produced, and they were conducted thirteen years apart. EPIC has obtained documents about pre-trial risk assessments as well as a scoring system developed by the DHS to assign risk assessments to travelers, including US citizens. EPIC has urged government agencies to make algorithmic decision-making transparent. A hypothetical sketch of how such an instrument turns scored items into a risk band follows this item.
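    Instruments like the LSI-R generally work by having an assessor score individual items, summing the item scores, and mapping the total to a risk band, which is why subjective item definitions flow directly into the final number. In the sketch below, the item names echo categories from the Idaho documents, but the scores and cut-points are invented for illustration; this is not the actual LSI-R scoring.

    ```python
    # Each item is scored 0/1 by a human assessor -- this is where the
    # subjectivity EPIC identified enters the calculation.
    items = {
        "criminal_history": 1,
        "associates_with_offenders": 1,      # the "social network" category
        "no_organized_leisure_activity": 1,  # the "leisure" category
        "mental_health_flag": 0,
    }

    total = sum(items.values())

    # The summed score is mapped to a risk band with fixed cut-points.
    band = "low" if total <= 1 else "moderate" if total <= 2 else "high"
    print(total, band)  # 3 high
    ```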
  • Senators Demand Answers on Algorithmic Bias in Healthcare » (Dec. 4, 2019)
    Senators Cory Booker (D-NJ) and Ron Wyden (D-OR) sent letters to health insurance companies and two government agencies (the FTC and Centers for Medicare and Medicaid Services) asking how they're addressing bias in health care algorithms. The Senators wrote: "Unfortunately, both the people who design these complex systems, and the massive sets of data that are used, have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases." Booker and Wyden recently introduced the Algorithmic Accountability Act, which would direct businesses to correct discriminatory algorithms. EPIC has promoted Algorithmic Transparency, supported the Universal Guidelines for AI, and published the first reference book on AI policy.
  • EPIC Obtains Documents about Nebraska's Flawed Risk Assessment Software » (Nov. 21, 2019)
    In response to EPIC's Freedom of Information Act request, the Nebraska Department of Correctional Services has provided EPIC with several documents about Nebraska's use of pre-trial risk assessments. Emails among state officials reveal concerns about the accuracy of the Vant4ge algorithm used for risk assessment. The head of the state agency wrote that "there has not been consistency in how the STRONG-R training is delivered" and that "there are errors in how the 'severity index' of specific crimes is coded in the Vant4ge software" which "affect the final risk and needs score calculations produced by the assessment." According to the contract obtained by EPIC, Nebraska committed to continue with Vant4ge until 2022. EPIC previously pursued several lawsuits to obtain information about "predictive policing" and "future crime prediction" algorithms. EPIC has also obtained documents about pre-trial risk assessments as well as a scoring system developed by the DHS to assign risk assessments to travelers, including US citizens. EPIC has urged government agencies to make algorithmic decision-making transparent.
  • Swiss Sign Convention 108+, 35 Countries Back Privacy Convention » (Nov. 21, 2019)
    This week, Switzerland signed the Modernized International Privacy Convention. With the Swiss signature, thirty-five countries now back Convention 108+. The Council of Europe Convention 108+ is the first and only binding international legal instrument for data protection. Updated in 2018, the Modernized Convention includes new provisions on biometric data, algorithmic transparency, and enhanced oversight. Non-members of the Council of Europe are able to sign the Convention, and EPIC and consumer groups have long urged the United States to ratify the international Privacy Convention.
  • At Council of Europe, EPIC's Rotenberg Urges Focus on AI and Human Rights » (Nov. 19, 2019)
    Speaking to the Council of Europe in Strasbourg, EPIC's Marc Rotenberg urged democratic nations to move forward with a policy framework for AI that safeguards human rights. "You cannot afford to wait," said Mr. Rotenberg, describing EPIC's work to establish algorithmic accountability. In the past few years, EPIC has promoted Algorithmic Transparency, supported the Universal Guidelines for AI, and published the first reference book on AI policy. EPIC has also challenged the secrecy of the US National Security Commission on AI and urged the recognition of AI policy frameworks to regulate the use of AI techniques.
  • Appeals Court Questions Government on Reliability of Google Scanning Algorithm » (Nov. 18, 2019)
    This week, a federal appellate judge pressed the government about the reliability of a Google scanning algorithm that provided the basis for the warrantless search of a private email. EPIC raised concerns about the scanning technique in an amicus brief for the appeals court. In United States v. Wilson, EPIC argued that "because neither Google nor the Government explained how the image matching technique actually works or presented evidence establishing accuracy and reliability, the Government's search was unreasonable." Judge Watford told the government attorney that he "would like to hear your defense of the evidentiary record" because the record "is this declaration from the Google person," and said, "I would need far more explanation of how reliable the hash matching technology is before I could validate this search." EPIC filed an amicus brief in a similar case, United States v. Miller. EPIC routinely submits amicus briefs on the privacy implications of new investigative techniques. EPIC has also long promoted algorithmic transparency to ensure accountability for AI-based decision making.
  • EPIC Files Complaint with FTC about Employment Screening Firm HireVue » (Nov. 6, 2019)
    Today, EPIC filed a complaint with the FTC alleging that recruiting company HireVue has committed unfair and deceptive practices in violation of the FTC Act. EPIC charged that HireVue falsely denies it uses facial recognition. EPIC also said the company failed to comply with baseline standards for AI decision-making, such as the OECD AI Principles and the Universal Guidelines for AI. The company purports to evaluate a job applicant's qualifications based upon their appearance by means of an opaque, proprietary algorithm. EPIC has brought many similar consumer privacy complaints to the FTC, including a complaint on Facebook's facial recognition practices that contributed to the FTC's 2019 settlement with Facebook. Last year EPIC also asked the FTC to investigate the Universal Tennis Rating system, a secret technique for scoring high school athletes.
  • Senators Propose Alternative to "Opaque Algorithms" » (Nov. 1, 2019)
    A bipartisan group of Senators has introduced legislation that would give users the option to engage with a platform without being manipulated by algorithms driven by user-specific data. The Filter Bubble Transparency Act, sponsored by Senators Thune (R-SD), Blumenthal (D-CT), Moran (R-KS), Blackburn (R-TN), and Warner (D-VA), would require large platforms to provide users with the option of a filter bubble-free view of the information they provide. "This legislation is about transparency and consumer control," said Senator Thune. EPIC board member Shoshana Zuboff said, "Filter bubbles divide and conquer. The Filter Bubble Transparency Act begins the work of breaking this manipulative and divisive cycle." However, the bill stops short of requiring Internet companies to reveal the algorithms used to manipulate users. EPIC first warned the Federal Trade Commission about the risk of opaque search algorithms in 2011. EPIC has since advocated for Algorithmic Transparency and urged adoption of the Universal Guidelines for AI. In a 2017 statement for the Senate Commerce Committee, EPIC wrote, "It is becoming increasingly clear that Congress must regulate AI to ensure accountability and transparency." A toy sketch of the contrast between a personalized ranking and a verifiable, filter bubble-free view follows this item.
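    The bill's core mechanism is easy to sketch: alongside a personalized ranking driven by user-specific data, a platform would offer a view whose ordering any user can verify. Assuming, purely for illustration, that the filter bubble-free view is plain reverse-chronological ordering:

    ```python
    from datetime import datetime

    posts = [
        {"id": 1, "time": datetime(2019, 11, 1, 9), "engagement_score": 0.91},
        {"id": 2, "time": datetime(2019, 11, 1, 12), "engagement_score": 0.20},
        {"id": 3, "time": datetime(2019, 11, 1, 10), "engagement_score": 0.55},
    ]

    # Opaque view: ranked by a model's user-specific engagement prediction,
    # which the user cannot inspect or verify.
    personalized = sorted(posts, key=lambda p: -p["engagement_score"])

    # "Filter bubble-free" view: reverse-chronological order, verifiable by
    # anyone without knowing anything about the platform's model.
    chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

    print([p["id"] for p in personalized])   # [1, 3, 2]
    print([p["id"] for p in chronological])  # [2, 3, 1]
    ```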
  • Bill Introduced to Regulate Forensic Algorithms » (Sep. 23, 2019)
    U.S. Rep. Mark Takano (D-CA 41) has introduced the "Justice in Forensic Algorithms Act of 2019." The Act would create federal standards for the development and use of forensic algorithms and would prohibit the use of trade secrets privileges to prevent defense access to evidence in criminal proceedings. The Computational Forensic Algorithm Standards would include considerations of bias, accuracy, precision, and reproducibility, and would require "publicly available documentation by developers of computational forensic software of the purpose and function of the software, the development process, including source and description of training data, and internal testing methodology and results, including source and description of testing data." Earlier this year, Iowa passed a law regarding pre-trial risk assessment algorithms. EPIC has advocated for Algorithmic Transparency across all applications and urges the use of the Universal Guidelines for Artificial Intelligence to guide AI regulation. A new publication from EPIC, the AI Policy Sourcebook, includes major policy frameworks for artificial intelligence. A sketch of the kind of validation metrics such standards would cover appears below.
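    Standards of the kind the bill contemplates are, at bottom, agreed-upon formulas computed over validation data. As a minimal sketch with hypothetical counts, accuracy (overall agreement with ground truth) and precision (how often a reported "match" really is one) for a forensic matching tool could be computed like this:

    ```python
    def confusion_metrics(tp: int, fp: int, tn: int, fn: int):
        """Accuracy and precision from a confusion matrix: tp/fp/tn/fn are
        true/false positives and true/false negatives from a validation run."""
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        precision = tp / (tp + fp)
        return accuracy, precision

    # Hypothetical validation run: 90 true matches found, 10 false matches,
    # 880 correct exclusions, 20 missed matches.
    print(confusion_metrics(tp=90, fp=10, tn=880, fn=20))  # (0.97, 0.9)
    ```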
  • EPIC Advisory Board Member Anne Washington Testifies Before Congress » (Sep. 12, 2019)
    EPIC Advisory Board Member Professor Anne Washington today testified at a hearing on "The Future of Identity in Financial Services: Threats, Challenges, and Opportunities." Professor Washington said, "Ignoring AI exceptions in financial services risks excluding many in our society because they are outliers from expectations...By baking privacy, security, and usability into the design of our AI systems, we can build a more responsible and ethical data environment." EPIC supports algorithmic transparency, which would reduce bias and help ensure fairness in automated decisionmaking. EPIC proposed the Universal Guidelines for Artificial Intelligence as the basis for federal legislation. The Universal Guidelines have been endorsed by more than 250 experts and 60 organizations in 40 countries. EPIC recently published the "AI Policy Sourcebook," containing the Universal Guidelines and other AI policy frameworks.
  • Facebook Faces More Civil Rights Lawsuits » (Aug. 20, 2019)
    A new lawsuit alleges that Facebook violated the Fair Housing Act by allowing advertisers to use factors such as race, sex, and disability to prevent home buyers and renters from seeing housing ads. Facebook recently settled claims and made changes to its advertising practices following lawsuits by the Department of Housing and Urban Development. EPIC is currently challenging the FTC's settlement with Facebook, arguing that it provides little benefit to Facebook users. EPIC also supports algorithmic transparency, which would reduce bias and help ensure fairness in automated decisionmaking. EPIC proposed the Universal Guidelines for Artificial Intelligence as the basis for federal legislation. The Universal Guidelines have been endorsed by more than 250 experts and 60 organizations in 40 countries.
  • In FOIA Appeal, EPIC Argues for Release of Predictive Analytics Report » (Jul. 19, 2019)
    EPIC has filed its opening brief in EPIC v. DOJ, a Freedom of Information Act case concerning predictive policing, algorithmic transparency, and executive privilege. EPIC's case, now before the D.C. Circuit Court of Appeals, seeks the public release of a report on AI techniques in the criminal justice system. Last year, a lower court allowed the agency to assert the "presidential communications privilege" and withhold the report, but neither the D.C. Circuit nor the Supreme Court has ever permitted a federal agency to invoke that privilege in a FOIA case. "The records sought in this [FOIA] case concern the use of predictive analytic techniques in the U.S. criminal justice system, a topic of vital public interest," EPIC wrote. "But the questions presented on appeal have even broader significance for open government." EPIC has pursued numerous FOIA cases concerning algorithmic transparency, passenger risk assessment, "future crime" prediction, and proprietary forensic analysis.
  • EPIC To Congress: Require Algorithmic Transparency For Dominant Internet Firms » (Jul. 17, 2019)
    For a hearing on "Google and Censorship through Search Engines," EPIC sent a statement to the Senate Judiciary Committee. EPIC said that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. In 2011, EPIC sent a letter to the FTC stating that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. The FTC took no action on EPIC's complaint. But the European Commission found that Google rigged search results to give preference to its own shopping service. The European Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors.
  • White House Explores Social Media "Bias" » (Jul. 11, 2019)
    The White House is hosting a social media summit today to examine allegations of bias and censorship. EPIC objected to an earlier White House survey on this topic, noting that the White House failed to protect the privacy of respondents. EPIC told the White House that "this data collection is unlawful, unconstitutional, and itself a violation of the First Amendment." The White House has since disabled the survey. To address concerns about bias, EPIC supports algorithmic transparency and has urged federal agencies and Congress to mandate it. In 2007, EPIC explained to Congress that after Google acquired YouTube, Google substituted its own subjective algorithm based on "relevance" for objective criteria, such as number of hits and user ratings. The practical consequence was to elevate the rankings of Google's own web pages and to demote the ranking of other web pages, including EPIC's. Senator Josh Hawley (R-MO) recently introduced the "Ending Support for Internet Censorship Act," which would require tech companies to submit to an external audit proving that their algorithms and content-removal practices are politically neutral.
  • Professor Strossen Testifies on Social Media and Censorship » (Jun. 28, 2019)
    EPIC Advisory Board member and New York Law School Professor Nadine Strossen testified this week before the House Homeland Security Committee at a hearing on "Examining Social Media Companies' Efforts To Counter Online Terror Content and Misinformation." Professor Strossen advocated for non-censorial strategies for countering terror content and misinformation on social media. EPIC has previously told Congress that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online.
  • Senator Hawley Bill Would Mandate Algorithmic Transparency, Limit 230 Immunity » (Jun. 20, 2019)
    Senator Hawley (R-MO) has introduced the "Ending Support for Internet Censorship Act." The Act would require big tech companies to submit to an external audit that proves that their algorithms and content-removal practices are politically neutral. The bill would remove the immunity big tech companies receive under Section 230 of the Communications Decency Act if the FTC found that the algorithms and content-removal practices were not neutral. In 2007 EPIC explained to the Senate Judiciary Committee that after Google acquired YouTube, Google substituted its own subjective algorithm based on "relevance" for objective criteria, such as number of hits and user ratings. The practical consequence was to elevate the rankings of Google's own web pages and to demote the ranking of other web pages, including EPIC's. EPIC subsequently launched a campaign for algorithmic transparency and urged federal agencies and Congress to mandate algorithmic transparency.
  • EPIC Warns Appellate Court of Google’s Flawed, Secretive, Massive File Scanning Program » (Mar. 29, 2019)
    EPIC has filed an amicus brief in United States v. Wilson, a case concerning Google’s scanning of billions of personal files for suspected unlawful content, at the behest of the federal government. EPIC argued that “because neither Google nor the Government explained how the image matching technique actually works or presented evidence establishing accuracy and reliability, the Government’s search was unreasonable.” EPIC also explained that “the lower court made a key mistake” by confusing file hashing, which uniquely identifies a file, and image matching, which is prone to false positives. Last year, EPIC filed an amicus brief in a similar case, United States v. Miller. EPIC has promoted algorithmic transparency for many years. EPIC routinely submits amicus briefs on the application of the Fourth Amendment to investigative techniques.
  • Federal Government Charges Facebook with Housing Discrimination, Algorithmic Profiling at Issue » (Mar. 28, 2019)
    The Department of Housing and Urban Development has charged Facebook with violating the Fair Housing Act by enabling discrimination through user profiling on the advertising platform. “Facebook is discriminating against people based upon who they are and where they live,” said HUD Secretary Ben Carson. “Using a computer to limit a person’s housing choices can be just as discriminatory as slamming a door in someone’s face.” EPIC supports "algorithmic transparency,” which could reduce bias and help ensure fairness in automated decisionmaking. EPIC proposed the Universal Guidelines for Artificial Intelligence as the basis for federal legislation. The Universal Guidelines have been endorsed by more than 250 experts and 60 organizations in 40 countries. EPIC has pursued numerous FOIA cases concerning algorithmic transparency, passenger risk assessment, "future crime" prediction, and proprietary forensic analysis.
  • Over 40 Civil Rights, Civil Liberties, and Consumer Groups Call on Congress to Address Data-Driven Discrimination » (Feb. 13, 2019)
    EPIC joined 43 civil society organizations in a letter to Congress calling on legislators to protect civil rights, equity, and equal opportunity in the digital ecosystem. The organizations wrote that any privacy legislation must be consistent with the Civil Rights Principles for the Era of Big Data, which include: stop high-tech profiling, ensure fairness in automated decisions, preserve constitutional principles, enhance individual control of personal information, and protect people from inaccurate data. The groups said: "Platforms and other online services should not be permitted to use consumer data to discriminate against protected classes or deny them opportunities in commerce, housing, and employment, or full participation in our democracy." EPIC supports "algorithmic transparency", the public's right to know the data processes that impact their lives so they can contest decisions made by algorithms.
  • EPIC To Congress: Require Algorithmic Transparency For Google, Dominant Internet Firms » (Dec. 10, 2018)
    EPIC has sent a statement to the House Judiciary Committee in advance of a hearing on Google's business practices. EPIC said that "algorithmic transparency" should be required for Internet firms. EPIC explained that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. EPIC pointed out that Google's algorithm preferences YouTube's web pages over EPIC's in searches for videos concerning "privacy." Last year the European Commission found that Google rigged search results to preference its own online service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors. The US Federal Trade Commission has failed to take similar action, even after receiving substantial complaints. EPIC also urged Congress to consider the Universal Guidelines for AI as a basis for federal legislation.
  • EPIC to Senators: Universal Guidelines for Artificial Intelligence Are a Model Policy » (Nov. 30, 2018)
    In a statement to a Senate committee focused on technology and privacy, EPIC urged Senators to implement the Universal Guidelines for Artificial Intelligence in US law. The Guidelines maximize the benefits of AI, minimize the risk, and ensure the protection of human rights. More than 200 experts and 50 organizations, including the American Association for the Advancement of Science, have endorsed the Universal Guidelines. EPIC also expressed concern about the secrecy surrounding the Senate workshops on AI. In a petition earlier this year, EPIC and leading scientific organizations, including AAAS, ACM, and IEEE, and nearly 100 experts urged the White House to solicit public comments on AI policy. EPIC told the Senate committee that the Senate must also ensure a public process for developing AI policy. EPIC has pursued several criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability for AI decisionmaking. In 2015, EPIC launched an international campaign for Algorithmic Transparency.
  • EPIC Files Amicus in Case Concerning Government Searches and Google's Email Screening Practices » (Oct. 18, 2018)
    EPIC has filed an amicus brief with the U.S. Court of Appeals for the Sixth Circuit in United States v. Miller, arguing that the Government must prove the reliability of Google's email screening technique. The lower court held that law enforcement could search any images that Google's algorithm had flagged as apparent child pornography. EPIC explained that a search is unreasonable when the government cannot establish the reliability of the technique. EPIC also warned that the government could use this technique "to determine if files contain religious viewpoints, political opinions, or banned books." EPIC has promoted algorithmic transparency for many years. EPIC routinely submits amicus briefs on the application of the Fourth Amendment to investigative techniques. EPIC previously urged the government to prove the reliability of investigative techniques in Florida v. Harris.
  • EPIC Files Appeal with D.C. Circuit, Seeks Release of 'Predictive Analytics Report' » (Oct. 12, 2018)
    EPIC has appealed a federal district court decision in its suit seeking the release of a "Predictive Analytics Report." The district court backed the Department of Justice when the agency claimed the "presidential communications privilege." But neither the D.C. Circuit Court of Appeals nor the Supreme Court has ever permitted a federal agency to invoke that privilege in a FOIA case. EPIC sued the agency in 2017 to obtain records about "risk assessment" tools in the criminal justice system. These controversial techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. EPIC has pursued numerous FOIA cases concerning algorithmic transparency, passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The D.C. Circuit will likely hear EPIC's appeal next year.
  • International Privacy Convention Open for Signature » (Oct. 11, 2018)
    The Council of Europe has opened for signature updates to Convention 108, the international Privacy Convention. Among other changes, the modernized Convention requires prompt data breach notification, establishes national supervisory authorities to ensure compliance, permits transfers abroad only when personal data is sufficiently protected, and provides new user rights, including algorithmic transparency. Twenty-one nations have signed the treaty. Many more are expected to sign. EPIC and consumer coalitions have urged the United States to ratify the international Privacy Convention. The complete text of the modernized Convention will be available in the 2018 edition of the Privacy Law Sourcebook, available at the EPIC Bookstore.
  • California Bans Anonymous Bots, Regulates Internet of Things » (Oct. 2, 2018)
    California Governor Jerry Brown recently signed two modern privacy laws, including a first-in-the-nation law governing the security of the Internet of Things. SB327 sets baseline security standards for IoT devices. EPIC recently submitted comments to the Consumer Product Safety Commission recommending similar action. Governor Brown also signed a bill banning anonymous bots. The law makes it illegal to use a bot, or automated account, to mislead California residents or to communicate without disclosing the identity of the actual operator. EPIC President Marc Rotenberg had earlier proposed that Asimov's Laws of Robotics be updated to require that robots reveal the basis of their decisions (Algorithmic Transparency) and that robots reveal their actual identity.
  • EPIC To Congress: Public Participation Required for US Policy on Artificial Intelligence » (Aug. 21, 2018)
    In advance of a hearing concerning the Office of Science and Technology Policy, EPIC said that OSTP should ensure public participation in the development of AI policy. EPIC told the Senate Commerce Committee that Congress must also implement oversight mechanisms for the use of AI. EPIC said that Congress should require algorithmic transparency, particularly for government systems that involve the processing of personal data. In a recent petition to OSTP, EPIC, leading scientific organizations, including AAAS, ACM, and IEEE, and nearly 100 experts urged the White House to solicit public comments on artificial intelligence policy. EPIC has pursued several criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability. In 2015, EPIC launched an international campaign for Algorithmic Transparency.
  • EPIC to FTC: Algorithmic Decision-Making Requires Transparency » (Aug. 21, 2018)
    EPIC has advised the FTC on algorithmic decision tools, artificial intelligence, and predictive analytics for the hearings on "Competition and Consumer Protection in the 21st Century." In the comments, EPIC urged the FTC to (1) prohibit unfair and deceptive algorithms, (2) seek legislative authority for "algorithmic transparency" to establish consumer protection in automated decision-making, (3) provide guidance on the ethical design and implementation of algorithms, and (4) make public the "Universal Tennis Rating" algorithm that secretly scores young athletes. Calling on the Commission to act on EPIC's repeated complaints about the proprietary algorithm, which poses risks to children's privacy, EPIC said that "secret algorithms are unfair and deceptive," conceal bias, and deprive consumers of opportunities in the marketplace. EPIC champions "Algorithmic Transparency" and has advised Congress that algorithmic transparency is necessary for fairness and accountability.
  • Court Blocks EPIC's Efforts to Obtain "Predictive Analytics Report" » (Aug. 16, 2018)
    A federal court in the District of Columbia has blocked EPIC's efforts to obtain a secret "Predictive Analytics Report" in a FOIA case against the Department of Justice. The court sided with the agency, which had withheld the report and asserted the "presidential communications privilege." Neither the Supreme Court nor the D.C. Circuit has ever permitted a federal agency to invoke that privilege in a FOIA case. EPIC sued the agency in 2017 to obtain records about "risk assessment" tools in the criminal justice system. These techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. Many criminal justice experts oppose their use. EPIC has pursued several FOIA cases concerning "algorithmic transparency," passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The case is EPIC v. DOJ (D.D.C. Aug. 14, 2018). EPIC is considering an appeal.
  • Bot Disclosure Act Would Promote Identification, Accountability » (Jul. 19, 2018)
    Sen. Dianne Feinstein (D-Calif.) has introduced S. 3127, the Bot Disclosure and Accountability Act of 2018. The bill directs the FTC to create a rule to require social media companies to disclose any social media bots on their platform. The bill also prohibits candidates and political parties from using bots. "This bill is designed to help respond to Russia's efforts to interfere in U.S. elections through the use of social media bots, which spread divisive propaganda," Feinstein said. Earlier this week, EPIC sent a statement to the House Judiciary Committee arguing that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. EPIC has also recommended identification requirements for drones.
  • EPIC To Congress: Require Algorithmic Transparency For Dominant Internet Firms » (Jul. 16, 2018)
    In advance of a hearing on Filtering Practices of Social Media Companies, EPIC has sent a statement to the House Judiciary Committee. EPIC said that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. In 2011, EPIC sent a letter to the FTC stating that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. The FTC took no action on EPIC's complaint. But last year, after a seven-year investigation, the European Commission found that Google rigged search results to give preference to its own shopping service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors.
  • EPIC Testifies at FEC Hearing on Online Political Ads, Urges Greater Transparency » (Jun. 27, 2018)
    The Federal Election Commission is holding a two-day hearing to hear expert testimony on the agency's proposed rule governing disclosures for political ads on the Internet. Christine Bannan, EPIC Administrative Law and Policy Fellow, will testify on the second day of the hearing. EPIC submitted multiple comments to the FEC urging the agency to promulgate rules that would require online political ads to disclose their funders, as is required for traditional media ads. EPIC proposed that the FEC adopt "algorithmic transparency" procedures that would require advertisers to disclose the demographic factors behind targeted political ads, as well as the source and payment, and maintain a public directory of advertiser data. EPIC's Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to safeguard democratic institutions from various forms of cyber attack.
  • EPIC Calls on FEC to Pass Stronger Transparency Rules for Political Ads » (May. 24, 2018)
    EPIC submitted comments on the Federal Election Commission's (FEC) proposed rules for political ads on the internet. The FEC proposed two alternative rules: one that would hold internet companies to the same standard as traditional media companies, and one that would make exceptions for online ads. EPIC stated: "FEC rules should be technology-neutral and consistent across media platforms." EPIC also recommended that the FEC adopt algorithmic transparency rules, which would require advertisers to disclose the demographic factors behind targeted political ads, as well as the source and payment, and maintain a public directory of advertiser data; a sketch of what one such disclosure record might contain follows this item. EPIC's Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to safeguard democratic institutions from various forms of cyber attack.
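    EPIC's proposal implies a concrete disclosure record for every targeted political ad. Below is a minimal sketch of what one entry in such a public directory might contain; every field name and value here is hypothetical, not part of the FEC proposal.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AdDisclosure:
        """One entry in a public directory of advertiser data: who paid,
        how much, and which demographic factors drove the targeting."""
        sponsor: str
        payment_usd: float
        targeting_factors: list = field(default_factory=list)

    record = AdDisclosure(
        sponsor="Example PAC",
        payment_usd=12500.00,
        targeting_factors=["age 45-65", "zip code 20001", "parents of teens"],
    )
    print(record)
    ```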
  • EPIC Renews Call For FTC To Stop Secret Scoring of Young Athletes » (May. 23, 2018)
    EPIC has urged the Federal Trade Commission to act on a complaint EPIC previously filed with the FTC about the secret scoring of young tennis players. The EPIC complaint concerns the "Universal Tennis Rating," a proprietary algorithm used to assign numeric scores to tennis players, many of whom are children under 13. According to EPIC, "the UTR score defines the status of young athletes in all tennis-related activity; impacts opportunities for scholarship, education and employment; and may in the future provide the basis for 'social scoring' and government rating of citizens." EPIC pointed to objective, provable, and transparent rating systems such as Elo as far preferable; a sketch of the Elo update rule follows this item. EPIC has championed "Algorithmic Transparency" as a fundamental human right. Earlier this month, the Council of Europe adopted the modernized Privacy Convention, which establishes a legal right for individuals to obtain "knowledge of the reasoning" for the processing of personal data.
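    What makes Elo "objective, provable, and transparent" is that the entire method is two published formulas: an expected score E_A = 1 / (1 + 10^((R_B - R_A)/400)) and an update R_A' = R_A + K(S_A - E_A). A minimal sketch, using an illustrative K-factor of 32:

    ```python
    def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
        """One Elo update: the expected score follows a logistic curve on the
        rating gap, and the winner gains exactly what the loser gives up."""
        expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
        new_a = r_a + k * (score_a - expected_a)
        new_b = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
        return new_a, new_b

    # A 1500-rated player upsets a 1700-rated player (score_a = 1 for a win):
    print(elo_update(1500, 1700, 1.0))  # approximately (1524.3, 1675.7)
    ```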
  • White House Establishes AI Advisory Committee » (May. 10, 2018)
    The White House has established the "Select Committee on Artificial Intelligence" to advise the President and coordinate AI policies among executive branch agencies. The Office of Science and Technology Policy, NSF, and DARPA will lead the interagency committee. According to the White House, the goals of the Committee are to (1) prioritize funding for AI research and development; (2) remove barriers to AI innovation; (3) train the future American workforce; (4) achieve strategic military advantage; (5) leverage AI for government services; and (6) lead international AI negotiations. The Committee will also coordinate efforts across federal agencies to research and adopt technologies such as autonomous systems, biometric identification, computerized image and video analysis, machine learning, and robotics. It is unclear whether the Committee will include public perspectives in its work. In 2014, EPIC, joined by 24 consumer privacy, public interest, scientific, and educational organizations, petitioned the OSTP to accept public comments on a White House project concerning Big Data. The petition stated, "The public should be given the opportunity to contribute to the OSTP's review of 'Big Data and the Future of Privacy' since it is their information that is being collected and their privacy and their future that is at stake." In 2015, EPIC launched an international campaign for Algorithmic Transparency and recently urged Congress to establish oversight mechanisms for the use of AI by federal agencies.
  • EPIC Urges Congress to Require Algorithmic Transparency For Dominant Internet Firms » (Apr. 25, 2018)
    In advance of a hearing on Filtering Practices of Social Media Companies, EPIC has sent a statement to the House Judiciary Committee. EPIC said that "algorithmic transparency" could help establish fairness, transparency, and accountability for much of what users see online. In 2011, EPIC sent a letter to the FTC stating that Google's acquisition of YouTube led to a skewing of search results after Google substituted its secret "relevance" ranking for the original objective ranking, based on hits and ratings. The FTC took no action on EPIC's complaint. But last year, after a seven-year investigation, the European Commission found that Google rigged search results to give preference to its own shopping service. The Commission required Google to change its algorithm to rank its own shopping comparison the same way it ranks its competitors.
  • EPIC Tells House Committee: Require Transparency for Government Use of AI » (Apr. 19, 2018)
    In advance of a hearing on "Game Changers: Artificial Intelligence Part III, Artificial Intelligence and Public Policy," EPIC told the House Oversight Committee that Congress must implement oversight mechanisms for the use of AI by federal agencies. EPIC said that Congress should require algorithmic transparency, particularly for government systems that involve the processing of personal data. EPIC also said that Congress should amend the E-Government Act to require disclosure of the logic of algorithms that profile individuals. EPIC made similar comments to the UK Privacy Commissioner on issues facing the EU under the GDPR. A recent GAO report explored challenges with AI, including the risk that machine-learning algorithms may not comply with legal requirements or ethical norms. EPIC has pursued several criminal justice FOIA cases, and FTC consumer complaints to promote transparency and accountability. In 2015, EPIC launched an international campaign for Algorithmic Transparency.
  • EPIC to UK Privacy Commissioner: Data Protection Assessments Require Algorithmic Transparency » (Apr. 13, 2018)
    EPIC has submitted extensive comments on proposed guidance for Data Protection Impact Assessments. The new European Union privacy law, the GDPR, requires organizations to carefully assess the collection and use of personal data. In comments to the UK privacy commissioner, EPIC said that disclosure of the technique for decision making is a core requirement for Data Protection Impact Assessments. EPIC supports "Algorithmic Transparency." EPIC has pursued criminal justice FOIA cases and FTC consumer complaints to promote transparency and accountability. EPIC has warned Congress of the risks of "citizen scoring."
  • Congress Launches Caucus on Artificial Intelligence » (Apr. 3, 2018)
    Congressional leaders have announced the establishment of the Congressional Artificial Intelligence Caucus. The Caucus will bring together experts from academia, government, and the private sector to inform policymakers of the technological, economic, and social impacts of advances in AI. The Congressional AI Caucus is bipartisan and co-chaired by Congressmen John Delaney (D-MD) and Pete Olson (R-TX). This is one of several initiatives in Congress to pursue AI policy objectives. Rep. Delaney introduced the FUTURE of Artificial Intelligence Act (H.R. 4625) and Rep. Elise Stefanik (R-NY) introduced a bill (H.R. 5356) that would create the National Security Commission on AI. In 2015, EPIC launched an international campaign for Algorithmic Transparency. EPIC has also warned Congress about the growing use of opaque and unaccountable techniques in automated decision-making.
  • French President: Algorithmic Transparency Key to National AI Strategy » (Apr. 2, 2018)
    French President Emmanuel Macron has expressed support for "algorithmic transparency" as a core democratic principle. In an interview with Wired magazine, President Macron said that algorithms deployed by the French government and companies that receive public funding will be open and transparent. President Macron emphasized, "I have to be confident for my people that there is no bias, at least no unfair bias, in this algorithm." President Macron's statement echoed similar comments made in 2016 by German Chancellor Angela Merkel: "These algorithms, when they are not transparent, can lead to a distortion of our perception, they narrow our breadth of information." EPIC has a longstanding campaign to promote transparency and to end secret profiling. At UNESCO headquarters in 2015, EPIC said that algorithmic transparency should be a fundamental human right. In recent comments to UNESCO, EPIC highlighted the risk of secret profiling, content filtering, the skewing of search results, and adverse decision-making based on opaque algorithms.
  • EPIC to UNESCO: Algorithmic Transparency is an Internet Universality Indicator » (Mar. 16, 2018)
    EPIC has provided comments to UNESCO on a proposed framework for Internet Universality Indicators. The UNESCO framework emphasizes Rights, Openness, Accessibility, and Multistakeholder participation. UNESCO said that the framework will help guide protections for fundamental rights. EPIC proposed "Algorithmic Transparency" as a key indicator of Internet Universality. EPIC highlighted the risk of secret profiling, content filtering, the skewing of search results, and adverse decisionmaking based on opaque algorithms. EPIC has worked closely with UNESCO for over 20 years on Internet policy issues. At UNESCO headquarters in 2015, EPIC said that algorithmic transparency should be a fundamental human right.
  • Senators Question Intelligence Officials on Russian Election Interference » (Feb. 13, 2018)
    The Senate Intelligence Committee held a hearing today with top officials from all U.S. intelligence agencies: the Office of the Director of National Intelligence, CIA, NSA, Defense Intelligence Agency, FBI, and the National Geospatial-Intelligence Agency. The officials unanimously agreed that Russia interfered in the 2016 election and will interfere in the 2018 election, noting that they have already observed attempts to influence upcoming elections. Director of National Intelligence Dan Coats said: "There should be no doubt that Russia perceives its past efforts as successful and views the 2018 U.S. midterm elections as a potential target for Russian influence operations." EPIC launched the Project on Democracy and Cybersecurity after the 2016 presidential election to safeguard democratic institutions. EPIC is currently pursuing several FOIA cases concerning Russian interference, including EPIC v. FBI (cyberattack victim notification), EPIC v. ODNI (Russian hacking), EPIC v. IRS (release of Trump's tax returns), and EPIC v. DHS (election cybersecurity). EPIC also provided comments to the Federal Election Commission to improve transparency of election advertising on social media.
  • NYC Establishes Algorithm Accountability Task Force » (Dec. 21, 2017)
    New York City has passed the first bill to examine the discriminatory impacts of "automated decision systems." A task force will develop recommendations for how to make the city's algorithms fairer and more transparent. James Vacca, the bill's sponsor, said "If we're going to be governed by machines and algorithms and data, well, they better be transparent." EPIC supports algorithmic transparency and opposes systemic bias in "risk assessment" tools used in the criminal justice system. EPIC has filed Freedom of Information lawsuits to obtain information about "predictive policing" and "future crime prediction" algorithms. EPIC President Marc Rotenberg has called for laws that mandate algorithmic transparency and prohibit automated decision-making that results in discrimination.
  • EPIC FOIA: Justice Department Admits Algorithmic Sentencing Report Doesn't Exist » (Dec. 15, 2017)
    The Justice Department, in response to an EPIC FOIA lawsuit, has admitted that the United States Sentencing Commission never produced an evaluation of "risk assessment" tools in criminal sentencing. In 2014, Attorney General Eric Holder expressed concern about bias in criminal sentencing "risk assessments" and called on the Sentencing Commission to study the problem and produce a report. But after EPIC requested that study and sued the DOJ to obtain it, the DOJ conceded that the report was never produced. EPIC did obtain emails confirming the existence of a 2014 DOJ report about "predictive policing" algorithms, but the agency also withheld that report. "Risk assessments" are secret techniques used to set bail, to determine criminal sentences, and even to make decisions about guilt or innocence. EPIC has pursued several FOIA cases to promote "algorithmic transparency", including cases on passenger risk assessment, "future crime" prediction, and proprietary forensic analysis.
  • Support for Bills Establishing Oversight of AI Grows in Congress » (Dec. 12, 2017)
    Senators Maria Cantwell (D-WA) and Brian Schatz (D-HI) are planning legislation to establish new oversight committees for the use of AI. Cantwell's bill—Future of Artificial Intelligence Act of 2017—is cosponsored by Senators Ed Markey (D-MA) and Todd Young (R-IN) and would establish an AI committee at the Commerce Department. A companion bill in the House is sponsored by Representatives John Delaney (D-MD) and Pete Olson (R-TX), co-chairs of the Artificial Intelligence Caucus. Schatz has announced his intent to introduce a bill creating an independent AI commission. In 2015, EPIC launched an international campaign in support of Algorithmic Transparency and has warned Congress about the use of opaque techniques in automated decision-making.
  • EPIC Urges Congress to Regulate AI Techniques, Promotes 'Algorithmic Transparency' » (Dec. 12, 2017)
    In advance of a hearing on "Digital Decision-Making: The Building Blocks of Machine Learning and Artificial Intelligence," EPIC warned a Senate committee that many organizations now make decisions based on opaque techniques they don't understand. EPIC told Congress that algorithmic transparency is critical for democratic accountability. In 2015, EPIC launched an international a campaign in support of Algorithmic Transparency. At a speech to UNESCO in 2015, EPIC President Marc Rotenberg called knowledge of the algorithm "a fundamental human right." Earlier this year, EPIC filed a complaint with the FTC that challenged the secret scoring of athletes by Universal Tennis. EPIC said to the FTC that it "seeks to ensure that all rating systems concerning individuals are open, transparent and accountable."
  • EPIC Promotes 'Algorithmic Transparency,' Urges Congress to Regulate AI Techniques » (Nov. 28, 2017)
    In advance of a hearing on "Algorithms: How Companies' Decisions About Data and Content Impact Consumers," EPIC warned a Congressional committee that many organizations now make decisions based on opaque techniques they don't understand. EPIC told Congress that algorithmic transparency is critical for democratic accountability. In 2015, EPIC launched an international a campaign in support of Algorithmic Transparency. At a speech to UNESCO in 2015, EPIC President Marc Rotenberg called knowledge of the algorithm "a fundamental human right." Earlier this year, EPIC filed a complaint with the FTC that challenged the secret scoring of athletes by Universal Tennis. EPIC said to the FTC that it "seeks to ensure that all rating systems concerning individuals are open, transparent and accountable."
  • After Public Pressure, FEC To Begin Rulemaking On Online Ad Transparency » (Nov. 16, 2017)
    After receiving over 150,000 public comments, the Federal Election Commission voted unanimously to make new rules governing online political ad disclosures. EPIC, numerous other organizations, and lawmakers pressed the FEC to require transparency for online ads to combat foreign interference in U.S. elections. The FEC had solicited public comments on its internet disclosure rules three times in six years before finally taking action. A group of 15 Senators wrote, "The FEC must close loopholes that have allowed foreign adversaries to sow discord and misinform the American electorate." And a group of 18 members of Congress urged the FEC to "address head-on the topic of illicit foreign activity in U.S. elections." EPIC suggested the FEC go a step beyond simple disclosures and require "algorithmic transparency" for online platforms that deliver targeted ads to voters. Several senators have also introduced a bipartisan bill that would require the same disclosures for online ads as for television and radio. EPIC is fully engaged in protecting the integrity of elections with its Project on Democracy and Cybersecurity.
  • EPIC, Coalition Oppose Government's 'Extreme Vetting' Proposal » (Nov. 16, 2017)
    EPIC and a coalition of civil rights organizations have sent a letter to the Acting Secretary of Homeland Security strongly opposing the Extreme Vetting Initiative. A similar letter was sent by technical experts. The government's 'Extreme Vetting' initiative uses opaque procedures, secret profiles, and obscure data, including social media posts, to review visa applicants and make final determinations. EPIC has warned against both the government's use of social media data and secret algorithms to profile individuals for decision-making purposes. EPIC is also pursuing a FOIA request for details on the relationship between the Immigration and Customs Enforcement agency and Palantir, a company that provides software to analyze large amounts of data.
  • Consumer Bureau Proposes Policy Guidance for Data Aggregation Services » (Nov. 16, 2017)
    The Consumer Financial Protection Bureau recently set out guidance for financial services that aggregate consumer data. The Bureau outlined Consumer Protection Principles that "express the Bureau's vision for realizing a robust, safe, and workable data aggregation market that gives consumers protection, usefulness, and value." The Consumer Protection Principles for aggregated consumer data services are: (1) consumer access to information, (2) usability and limited scope of access by third parties, (3) consumer control and informed consent, (4) authorizing payments, (5) security, (6) access transparency, (7) accuracy, (8) ability to dispute and resolve unauthorized access, and (9) efficient and effective accountability mechanisms. EPIC has urged Congress to establish privacy and data security standards for consumer services and has championed algorithmic transparency. In testimony before Congress, EPIC Board member Professor Frank Pasquale explained that the use of secret algorithms often has adverse consequences for consumers.
  • Senators Urge FEC to Promote Transparency in Online Ads » (Nov. 13, 2017)
    A group of 15 Senators led by Mark Warner (D-VA), Amy Klobuchar (D-MN), and Claire McCaskill (D-MO) has urged the Federal Election Commission to improve transparency for online political ads. The Senators stated that, "the FEC can and should take immediate and decisive action to ensure parity between ads seen on the internet and those on television and radio." The Senators emphasized how "Russian operatives used advertisements on social media platforms to sow division and discord" during the 2016 election. EPIC provided comments to the FEC calling for "algorithmic transparency" and the disclosure of who paid for online ads. Senators Klobuchar, Warner, and McCain (R-AZ) have also introduced a bipartisan bill that would require the same disclosures for online political advertisements as for those on television and radio. EPIC's Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to promote election integrity and safeguard democratic institutions from various forms of cyber attack.
  • EPIC Promotes 'Algorithmic Transparency' for Political Ads » (Nov. 3, 2017)
    In comments to the Federal Election Commission, EPIC urged new rules to require transparency for online political ads. EPIC said voters should "know as much about advertisers as advertisers know about voters." EPIC called for algorithmic transparency which would require advertisers to disclose the demographic factors behind targeted political ads, as well as the source and payment. The FEC reopened a comment period on proposed rules "in light of developments." This week representatives from Facebook, Twitter and Google testified at two Senate hearings on the role that social media played in Russian meddling in the 2016 election. Senators Klobuchar (D-MN), Warner (D-VA), and McCain (R-AZ) have also introduced a bipartisan bill that would require increased disclosures for online political advertisements. EPIC's Project on Democracy and Cybersecurity, established after the 2016 presidential election, seeks to safeguard democratic institutions from various forms of cyber attack.
  • EPIC FOIA: EPIC Uncovers Report on "Predictive Policing" but DOJ Blocks Release » (Nov. 1, 2017)
    EPIC has just received new documents in a FOIA case against the Department of Justice; however, the agency is refusing to release reports about the use of "risk assessment" tools in the criminal justice system. In 2014, the Attorney General called on the U.S. Sentencing Commission to review the use of "risk assessments" in criminal sentencing, expressing concern about potential bias. EPIC requested that document and filed suit against the DOJ to obtain it, but the agency failed to release the report by a court-ordered deadline. EPIC did obtain emails confirming the existence of a 2014 DOJ report about "predictive policing" algorithms, but the agency also withheld that report. "Risk assessments" are secret techniques used to set bail, to determine criminal sentences, and even to decide guilt or innocence. EPIC has pursued several FOIA cases to promote algorithmic transparency, including cases on passenger risk assessment, "future crime" prediction, and proprietary forensic analysis.
  • At OECD, EPIC Renews Call for Algorithmic Transparency » (Oct. 27, 2017)
    Speaking at the OECD conference "Intelligent Machines, Smart Policies," EPIC President Marc Rotenberg urged support for Algorithmic Transparency. "We must establish this principle of accountability as the cornerstone of AI policy," said Mr. Rotenberg. Rotenberg spoke in support of Algorithmic Transparency at the 2014 OECD Global Forum for the Knowledge Economy in Tokyo. EPIC is now working with OECD member states, NGOs, business groups, and technology experts on the development of an AI policy framework, similar to earlier OECD policy frameworks on privacy, cryptography, and critical infrastructure protection.
  • Mattel Cancels "Aristotle," an Internet Device that Targeted Children » (Oct. 5, 2017)
    Mattel will scrap its plans to sell Aristotle, an Amazon Echo-type device that collects and stores data from young children. The Campaign for a Commercial-Free Childhood sent a letter and 15,000 petition signatures to the toymaker, warning of privacy and childhood development concerns. CCFC said that "young children shouldn't be encouraged to form bonds and friendships with data-collecting devices." Senator Markey (D-MA) and Representative Barton (R-TX) also chimed in, demanding to know how Mattel would protect families' privacy. EPIC backed the CCFC campaign and urged the FTC in 2015 to regulate "always-on" Internet devices. A pending EPIC complaint at the FTC concerns the secret scoring of young athletes.
  • NGOs to Meet with Privacy Commissioners at Public Voice Event in Hong Kong » (Sep. 19, 2017)
    The Public Voice will host an event with NGOs and Privacy Commissioners at the 39th International Conference of Data Protection and Privacy Commissioners in Hong Kong. "Emerging Privacy Issues: A Dialogue Between NGOs & DPAs" will address emerging privacy issues, including biometric identification, Algorithmic transparency, border surveillance, the India privacy decision, and implementation of the GDPR. Speakers include Chairman Isabelle Falque-Pierrotin of the CNIL and Article 29 Working Party, Commissioner John Edwards of New Zealand, and Director Eduardo Bertoni of Argentina. Also participating will be representatives of Access Now, EPIC, GP Digital, Privacy International, and the World Privacy Forum. The Public Voice, established in 1996, facilitates public participation in decisions concerning the future of the Internet.
  • EPIC Urges Senate To Establish Data Protection Standards For Financial Technologies » (Sep. 11, 2017)
    In advance of a hearing on financial technology, EPIC recommended that the Senate Committee establish privacy standards for financial companies that use social media and secret algorithms to make determinations about consumers. In light of the recent Equifax breach, EPIC proposed that the Committee make privacy and security its top priorities. Earlier this year, EPIC submitted a similar statement to the House Committee on Energy and Commerce. EPIC also recently filed a complaint with the CFPB regarding "starter interrupt devices" deployed by auto lenders to remotely disable cars when individuals are late on their payments. See also the testimony of Professor Frank Pasquale on "Exploring the Fintech Landscape."
  • EPIC FOIA: EPIC Seeks Details of ICE, Palantir Deal » (Aug. 15, 2017)
    EPIC has submitted a Freedom of Information Act request to Immigration and Customs Enforcement seeking details of the agency's relationship with Palantir. The federal agency contracted with the Peter Thiel company to establish vast databases of personal information, and develop new capabilities for searching, tracking, and profiling. EPIC is seeking the ICE contracts with Palantir, as well as training materials, reports, analysis, and other documents. The ICE Investigative Case Management System and the FALCON system now connect personal data across the federal government, oftentimes in violation of the federal Privacy Act. The Intercept reported that FALCON "will eventually give agents access to more than 4 billion 'individual data records.'" In the FOIA lawsuit EPIC v. CBP, EPIC uncovered Palantir's role in the Analytical Framework for Intelligence, a program that assigns "risk assessment" scores to travelers. EPIC continues to advocate for greater transparency in computer-based decision making.
  • Supreme Court Won't Review Ruling on Secretive Sentencing Algorithms » (Jun. 26, 2017)
    The Supreme Court has declined to review the ruling of a state court that upheld the use of a secret algorithm to determine a criminal sentence. The petitioner Loomis argued that he was not able to assess the fairness or accuracy of the legal judgment, and that the secret "risk assessment" algorithm therefore violated his fundamental Due Process rights. EPIC has pursued several related cases to establish the principle of algorithmic transparency in the United States. In EPIC v. DHS, EPIC obtained documents about secret behavioral algorithms that purportedly determine an individual's likelihood of committing a crime. In a series of state FOI cases, EPIC obtained records from state agencies about the use of proprietary DNA analysis tools to determine guilt or innocence. EPIC is currently litigating EPIC v. CBP before the DC Circuit Court of Appeals, a case concerning the secret scoring of airline passengers by the federal government.
  • Court Rules Secret Scoring of Teachers Unconstitutional » (Jun. 13, 2017)
    A federal district court has held that firing public school teachers based on the results of a secret algorithm is unconstitutional. The case, Houston Federation of Teachers v. Houston Independent School District, concerned a commercial software company's proprietary appraisal system that was used to score teachers. Teachers could not correct their scores, independently reproduce their scores, or learn more than basic information about how the algorithm worked. "When a public agency adopts a policy of making high stakes employment decisions based on secret algorithms incompatible with minimum due process, the proper remedy is to overturn the policy," the court wrote. EPIC recently filed a complaint asking the FTC to stop the secret scoring of young tennis players. EPIC has pursued several cases on "Algorithmic Transparency," including one for rating travelers and another for assessing guilt or innocence.
  • EPIC to Congress: Data Protection Needed for Financial Technologies » (Jun. 9, 2017)
    EPIC submitted a statement for a House Committee hearing on financial technologies, outlining the risks of new financial services. Companies now use social media data and secret algorithms to make determinations about consumers. They are also reaching out, through the "Internet of Things," to control consumers. EPIC recently filed a complaint with the CFPB about "starter interrupt devices," deployed by auto lenders to remotely disable cars when individuals are late on their payments.
  • EPIC Asks FTC to Stop System for Secret Scoring of Young Athletes » (May. 17, 2017)
    EPIC has filed a complaint with the Federal Trade Commission to stop the secret scoring of young tennis players. The EPIC complaint concerns the "Universal Tennis Rating", a proprietary algorithm used to assign numeric scores to tennis players, many of whom are children under 13. "The UTR score defines the status of young athletes in all tennis-related activity; impacts opportunities for scholarship, education and employment; and may in the future provide the basis for 'social scoring' and government rating of citizens," according to EPIC. EPIC urged the FTC to “find that a secret, unprovable, proprietary algorithm to evaluate children is an unfair and deceptive trade practice.” In 2015, EPIC launched a campaign on "Algorithmic Transparency" and has pursued several cases, including one for rating travelers and another for assessing guilt or innocence, that draw attention to the social risks of secret algorithms.
  • In Merger Reviews, EPIC Advocates for Privacy, Algorithmic Transparency » (May. 9, 2017)
    EPIC has sent a statement to the Senate Judiciary Committee ahead of a hearing on the new Antitrust Chief. EPIC urged the Committee to consider the role of consumer privacy and data protection in merger reviews. EPIC warned that "monopoly platforms" are reducing competition, stifling innovation, and undermining privacy. EPIC pointed to the FTC's failure to block the Google/DoubleClick merger which accelerated Google's dominance of Internet advertising and the WhatsApp/Facebook merger which paved the way for Facebook to access confidential WhatsApp user data. EPIC also suggested that "algorithmic transparency" would become increasingly important for merger analysis. EPIC is a leading consumer privacy advocate and regularly submits complaints urging investigations and changes to unfair business practices.
  • European Parliament Adopts Resolution on Big Data » (Mar. 24, 2017)
    The European Parliament has adopted a resolution on the fundamental rights implications of big data. The resolution stresses that "the prospects and opportunities of big data" can only be realized "when public trust in these technologies is ensured by a strong enforcement of fundamental rights and compliance with current EU data protection law." The resolution discusses the importance of data protection, accountability, transparency, data security, and privacy by design. EPIC has warned about the risks of big data and launched campaigns on "Algorithmic Transparency" and data protection.
  • EPIC Urges Senate Commerce Committee to Back Algorithmic Transparency, Safeguards for Internet of Things » (Mar. 22, 2017)
    EPIC has sent a letter to the Senate Commerce Committee concerning "The Promises and Perils of Emerging Technologies for Cybersecurity." EPIC urged the Committee to support "Algorithmic Transparency," an essential strategy for making automated decisions accountable. EPIC also pointed out the "significant privacy and security risks" of the Internet of Things. EPIC has been at the forefront of policy work on the Internet of Things and Artificial Intelligence, opposing government use of "risk-based" profiling, and recommending safeguards for connected cars, "smart homes," consumer products, and "always on" devices.
  • EPIC Sues Justice Department Over "Risk Assessment" Techniques » (Mar. 7, 2017)
    EPIC has filed a FOIA lawsuit against the Department of Justice for information about the use of "risk assessment" tools in the criminal justice system. These proprietary techniques are used to set bail, determine criminal sentences, and even contribute to determinations about guilt or innocence. Many criminal justice experts oppose their use. EPIC has pursued several FOIA cases to promote "algorithmic transparency." The EPIC cases include passenger risk assessment, "future crime" prediction, and proprietary forensic analysis. The Supreme Court is now considering whether to take a case on the use of a secretive technique to predict possible recidivism.
  • Pew Research Center Releases Report on Algorithms » (Feb. 8, 2017)
    The Pew Research Center has released a report, "Code-Dependent: Pros and Cons of the Algorithm Age." The Pew report discusses the impact that experts expect algorithms to have on individuals and society. Among the themes in the report are the biases and lack of human judgment in algorithmic decisionmaking and the need for "algorithmic literacy, transparency, and oversight." EPIC has promoted "Algorithmic Transparency" for many years and has proposed two amendments to Asimov's Laws of Robotics that would require autonomous devices to reveal the basis of their decisions and their actual identity.
  • Aspen Institute Report Explores Artificial Intelligence » (Jan. 30, 2017)
    The Aspen Institute released a report on its Artificial Intelligence workshop, which examined connected cars, healthcare, and journalism. "Artificial Intelligence Comes of Age" explored issues at "the intersection of AI technologies, society, economy, ethics and regulation." The Aspen report notes that "malicious hacks are likely to be an ongoing risk of self-driving cars" and that "because self-driving cars will generate and store vast quantities of data about driving behavior, control over this data will become a major issue." The Aspen report discusses the tension between privacy and diagnostic benefits in healthcare AI and describes "some of the alarming possible uses of AI in news media." EPIC has promoted Algorithmic Transparency and has been at the forefront of vehicle privacy through testimony before Congress, amicus briefs, and comments to the NHTSA.
  • The Verge Features EPIC FOIA Docs on Secret Profiling System » (Dec. 21, 2016)
    In an article today, The Verge featured an EPIC Freedom of Information Act lawsuit about a controversial government data mining program, operated by the Department of Homeland Security. EPIC is seeking documents on the "Analytical Framework for Intelligence," a program that assigns "risk assessment" scores to travelers using data from sources including the Automated Targeting System, also operated by the DHS. Travelers "don't know how the scores are being generated and what the factors are," said EPIC FOIA Counsel, John Tran. "What if there's an error? Users should have an opportunity to correct the error, users should have an opportunity to understand what goes into generating the score." The case is currently pending before a federal judge in Washington, DC. EPIC expects to obtain more records on AFI. The FOIA case is also related to EPIC's ongoing work on "Algorithmic Transparency."
  • European Parliament Explores Algorithmic Transparency » (Nov. 7, 2016)
    A hearing today in the European Parliament brought together technologists, ethicists, and policymakers to examine "Algorithmic Accountability and Transparency in the Digital Economy." Recently, German Chancellor Angela Merkel spoke against secret algorithms, warning that there must be more transparency and accountability. EPIC has promoted Algorithmic Transparency for many years and is currently litigating several cases on the front lines of AI, including EPIC v. FAA (drones), Cahen v. Toyota (autonomous vehicles), and algorithms in criminal justice. EPIC has also proposed two amendments to Asimov's Rules of Robotics, requiring autonomous devices to reveal the basis of their decisions and to reveal their actual identity.
  • EPIC Urges Massachusetts High Court to Protect Email Privacy » (Oct. 24, 2016)
    EPIC has filed an amicus brief in the Massachusetts Supreme Judicial Court regarding email privacy. At issue is Google's scanning of the email of non-Gmail users. EPIC argued that this is prohibited by the Massachusetts Wiretap Act. EPIC described Google's complex scanning and analysis of private communications, concluding that it was far more invasive than the interception of telephone communications prohibited by state law. A federal court in California recently ruled that non-Gmail users may sue Google for violation of the state wiretap law. EPIC has filed many amicus briefs in federal and state courts and participated in the successful litigation of a cellphone privacy case before the Massachusetts Supreme Judicial Court. The EPIC State Policy Project is based in Somerville, Massachusetts.
  • EPIC Promotes "Algorithmic Transparency" at Annual Meeting of Privacy Commissioners » (Oct. 20, 2016)
    Speaking at the 38th International Conference of the Data Protection and Privacy Commissioners in Marrakech, EPIC President Marc Rotenberg highlighted EPIC's recent work on algorithmic transparency and also proposed two amendments to Asimov's Rules of Robotics. Rotenberg cautioned that autonomous devices, such as drones, were gaining the rights of privacy - control over identity and secrecy of thought - that should be available only for people. Rotenberg also highlighted EPIC's recent publication "Privacy in the Modern Age", the Data Protection 2016 campaign, and the various publications available at the EPIC Bookstore. The 2017 Privacy Commissioners conference will be held in Hong Kong.
  • White House Releases Reports on Future of Artificial Intelligence » (Oct. 13, 2016)
    The White House has released two new reports on the impact of Artificial Intelligence on the US economy and related policy concerns. Preparing for the Future of Artificial Intelligence surveys the current state of AI, applications, and emerging challenges for society and public policy. The report concludes "practitioners must ensure that AI-enabled systems are governable; that they are open, transparent, and understandable; that they can work effectively with people; and that their operation will remain consistent with human values and aspirations." A companion report National Artificial Intelligence Research and Development Strategic Plan proposes a strategic plan for Federally-funded research and development in AI. President Obama will discuss these issues on October 13 at the White House Frontiers Conference in Pittsburgh. #FutureofAI EPIC has promoted Algorithmic Transparency for many years and is currently litigating several cases on the front lines of AI, including EPIC v. FAA (drones), and Cahen v. Toyota (autonomous vehicles).
  • Presidential Science Advisors Challenge Validity of Criminal Forensic Techniques » (Sep. 8, 2016)
    According to an upcoming report by the President’s Council of Advisors on Science and Technology, much of the forensic analysis in criminal trials is not scientifically valid. The report, to be released this month, attacks the validity of analysis of evidence like bite-marks, hair, and firearms. The "lack of rigor in the assessment of the scientific validity of forensic evidence is not just a hypothetical problem but a real and significant weakness in the judicial system,” wrote the council. The Senate Judiciary Committee held hearings in 2009 and 2012 to discuss the need to strengthen forensic science, and Sen. Patrick Leahy (D-VT) introduced a forensic reform bill in 2014. EPIC has pursued FOIA requests on the reliability of proprietary forensic techniques. EPIC also filed a brief on the reliability of novel forensic techniques in the Supreme Court case Florida v. Harris.
  • White House Report Points to Risks with Big Data » (May. 5, 2016)
    A new White House report "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights" points to risks with big data analytics. According to the authors, "[t]he algorithmic systems that turn data into information are not infallible--they rely on the imperfect inputs, logic, probability, and people who design them." An earlier White House report warned of "the potential of encoding discrimination in automated decisions." EPIC launched a campaign on "Algorithmic Transparency" after warning about the risks of secretive decision making coupled with "big data."
  • At UNESCO, EPIC's Rotenberg Argues for Algorithmic Transparency » (Dec. 8, 2015)
    Speaking at UNESCO headquarters in Paris, EPIC President Marc Rotenberg explained that algorithms, complex mathematical formulas, have an increasing impact on people's lives in such areas as commerce, employment, education, and housing. He warned that processes would continue to become more opaque as more decision making was automated. He said to experts in Freedom of Expression, Communication, and Information at UNESCO that "knowledge of the algorithm is a fundamental right, a human right." EPIC has launched a new program on Algorithmic Transparency, building on the work of several members of the EPIC Advisory Board.
  • EPIC Pursues Public Release of Secret DNA Forensic Source Code » (Oct. 14, 2015)
    EPIC has filed public records requests in six states to obtain the source code of "TrueAllele," a software product used in DNA forensic analysis. According to recent news reports, law enforcement officials use TrueAllele test results to establish guilt, but individuals accused of crimes are denied access to the source code that produces the results. A similar program used by New Zealand prosecutors was recently found to have a coding error that provided incorrect results in 60 cases, including a high-profile murder case. EPIC has previously urged the US Supreme Court to carefully consider the reliability of new investigative techniques and argued a federal appeals case against DNA dragnet surveillance. Citing the importance of algorithmic transparency in the criminal justice system, EPIC filed requests in California, Louisiana, New York, Ohio, Pennsylvania, and Virginia.
  • EPIC Pursues Lawsuit about Secret Government Profiling Program » (Aug. 11, 2015)
    EPIC has filed a reply brief in federal court, rebutting the government's claim that it can withhold information about automated profiling. In EPIC v. CBP, a Freedom of Information Act case, EPIC seeks documents about the "Analytical Framework for Intelligence," which incorporates personal information from government agencies, commercial data brokers, and the Internet. The agency then uses secret, analytic tools to assign "risk assessments" to travelers, including U.S. citizens traveling solely within the United States. EPIC submitted a FOIA request in 2014 for documents relating to the framework. EPIC has called for "algorithmic transparency" in automated decisions concerning individuals.
  • Facebook Applies for Patent to Collect Users' Credit Scores » (Aug. 5, 2015)
    Facebook has applied for a patent that would allow lenders to make credit decisions on a user based on the user's Facebook activity. If the patent is approved, Facebook will be able to collect the credit scores of a user's "friends" and supply a creditor with their average score. If that average is below a certain threshold, the lender will reject the application. EPIC has filed extensive comments with the Consumer Financial Protection Bureau, urging the agency to limit the amount of information creditors can access about consumers. EPIC has called for algorithmic transparency in automated decisions concerning individuals.
  • EPIC Pursues Documents about Secret Government Profiling Program » (Jul. 1, 2015)
    EPIC has filed papers in federal court challenging the government's claim that it can withhold information about automated profiling. In EPIC v. CBP, a Freedom of Information Act case, EPIC seeks documents about the "Analytical Framework for Intelligence" which incorporates personal information from government agencies, commercial data brokers, and the Internet. The agency then uses secret, analytic tools to assign "risk assessments" to travelers, including U.S. citizens traveling solely within the United States. EPIC has called for "algorithmic transparency" in automated decisions concerning individuals.
  • White House Report on "Big Data" Explores Price Discrimination, Opaque Decisionmaking » (Feb. 5, 2015)
    A White House report on Big Data and Differential Pricing released today examines new forms of discrimination resulting from big data analytics. The White House explained the risks to consumers, acknowledged the failure of self-regulatory efforts, and called for greater transparency and consumer control over their personal information. Last year, EPIC and a coalition of NGOs urged the President to establish privacy protections - including "algorithmic transparency", consumer control, and robust privacy techniques - to address Big Data risks.
  • Senators Challenge Verizon's Secret Mobile Tracking Program » (Jan. 30, 2015)
    In a letter to Verizon, Senators on the Commerce Committee challenged the company's practice of placing a "super cookie" on customers' smartphones. The letter follows the recent discovery that the advertising company Turn was secretly tracking Verizon customers, even after customers deleted its cookies. In the letter, the Senators asked Verizon to stop tracking users with undeletable cookies. EPIC has urged the White House and the Federal Trade Commission to limit the use of persistent identifiers. EPIC supports opt-in requirements and Privacy Enhancing Techniques for consumers, and algorithmic transparency for data collectors.
  • EPIC Urges House to Safeguard Consumer Privacy » (Jan. 26, 2015)
    EPIC has sent a statement to the House Commerce Committee for the hearing, "What are the Elements of Sound Data Breach Legislation?". EPIC had testified before the House Committee in 2011 on data breach notification, urging Congress to set a national baseline standard. EPIC also supports enactment of the Consumer Privacy Bill of Rights. EPIC also urged the House Committee to promote "algorithmic transparency." EPIC has warned that "[t]he ongoing collection of personal information in the United States without sufficient privacy safeguards has led to staggering increases in identity theft, security breaches, and financial fraud."

Summary

Google has developed a proprietary image matching algorithm that the company uses to scan every file uploaded to Google's services for alleged child pornography. The algorithm takes in an image file, processes the data, and returns an alphanumeric string, called a "hash value", that Google then tries to match to a repository of hash values corresponding to images it has flagged as child pornography. When Google's software detects a match, the company sends a report to the National Center for Missing and Exploited Children (NCMEC), including the user's personal data such as their IP address and secondary email address. NCMEC then gathers even more personal data about the Google user to send to law enforcement. Reporting is often automatic, such that no Google employee checks whether the matched file is, in fact, contraband. In this case, Defendant's file uploads were flagged as child pornography and automatically reported to NCMEC. Defendant challenged the use of the evidence at trial as a violation of his Fourth Amendment right against unreasonable searches. The District Court denied the motion to suppress, finding that Google conducted a private search, and that police did not expand that search. Defendant appealed to the Ninth Circuit. EPIC also filed an amicus brief in a similar case in the Sixth Circuit, United States v. Miller.
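
The mechanics of this pipeline can be illustrated with a minimal sketch. The code below is illustrative only: it assumes SHA-256 as a stand-in for Google's undisclosed, proprietary hash function, and every field and function name is invented for the example.

```python
import hashlib

# Stand-in repository of hash values for previously flagged images.
# Google's actual repository is built with its proprietary hashing
# technology; SHA-256 is used here only for illustration.
known_hashes = {hashlib.sha256(b"previously flagged image bytes").hexdigest()}

def scan_upload(file_bytes: bytes, user: dict) -> dict | None:
    """Hash an uploaded file and, on a repository match, assemble a
    report with the kinds of fields described in the case record.
    All names here are hypothetical."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest not in known_hashes:
        return None
    return {
        "email": user["email"],
        "ip_address": user["ip_address"],
        "filename": user["filename"],
        "files": [file_bytes],  # forwarded without any human review
    }

# A matching upload triggers an automatic report; no employee views it.
report = scan_upload(b"previously flagged image bytes",
                     {"email": "user@example.com",
                      "ip_address": "203.0.113.7",
                      "filename": "img001.jpg"})
print(report is not None)  # True: a report would be sent to NCMEC
```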

Background

Legal Background

The Fourth Amendment only protects against searches by the government, not private entities. In United States v. Jacobsen, 466 U.S. 109, 131 (1984), the Supreme Court decided that government searches that follow private searches and are within the scope of the private search are reasonable. In Jacobsen, the Court held that the Government’s warrantless inspection and testing of the contents of a package that had been previously searched by FedEx was permissible because “there was a virtual certainty” that the law enforcement officer’s search would not reveal “anything more than he had already been told.”

The question in this case is whether the Government has provided sufficient evidence to establish a "virtual certainty" that the files Google sent in a CyberTipline report to NCMEC, which police ultimately opened, were the same as those a Google employee previously viewed.

Factual Background

Google maintains a proprietary image matching system that automatically scans files uploaded to Google products, including Gmail, to search for child pornography. The defendant uploaded two images to Google's e-mail system, and Google's system flagged them as "apparent child pornography." The system then automatically generated and submitted “CyberTip Report # 5778397” to the National Center for Missing and Exploited Children (“NCMEC”) with the following information:

  • the date and time of the incident;
  • the e-mail address associated with the user account that uploaded the file;
  • the IP address associated with the upload;
  • a list of IP addresses used to access the user account (which can go as far back as the original account registration date);
  • the secondary email address associated with the account;
  • the filename;
  • the "categorization" of the image based on an existing rubric; and
  • copies of the image file(s).

Google was required by law to submit this CyberTipline report once it became aware of apparent child pornography. 18 U.S.C. § 2258A.

When NCMEC received Google's CyberTipline report, NCMEC staff initiated a web search for the email and IP addresses associated with the report, without opening the images sent by Google to confirm that they were contraband. NCMEC identifies information associated with the user’s IP address(es): Country, Region, City, Metro Code, Postal Code, Area Code, Latitude/Longitude, and Internet Service Provider or Organization. NCMEC staff also collect "data gathered from searches on publicly-available, open-source websites" using the account and user identifying information provided by the CyberTipline report. This information can include social media profiles, websites, addresses, and other personal data.
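
The enumerated fields lend themselves to a simple record type. The sketch below is a hypothetical data structure that mirrors the fields listed in the case record; it is not NCMEC's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class IpEnrichment:
    """Information NCMEC associates with a reported IP address,
    per the record in this case. The class itself is hypothetical."""
    country: str
    region: str
    city: str
    metro_code: str
    postal_code: str
    area_code: str
    latitude: float
    longitude: float
    isp_or_organization: str
```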

After NCMEC staff collected this information on the defendant, the report was referred to local police for potential investigation. A detective opened the images attached to the CyberTipline report and confirmed they were child pornography.

The extent of what is known about Google’s practices in using the hashing technology is described in the declaration of Cathy McGoff, a Senior Manager for Law Enforcement and Information Security at Google:

4. Based on [Google’s] private non-government interests, since 2008, Google has been using its own proprietary hashing technology to tag confirmed child sexual abuse images. Each offending image, after it is viewed by at least one Google employee, is given a digital fingerprint (“hash”) that our computers can automatically recognize and is added to our repository of hashes of apparent child pornography as defined in 18 USC § 2256. Comparing these hashes to hashes of content uploaded to our services allows us to identify duplicate images of apparent child pornography to prevent them from continuing to circulate on our products.

5. We also rely on users who flag suspicious content they encounter so we can review it and help expand our database of illegal images. No hash is added to our repository without a corresponding image first having been visually confirmed by a Google employee to be apparent child pornography.

6. Google trains a team of employees on the legal obligation to report apparent child pornography. The team is trained by counsel on the federal statutory definition of child pornography and how to recognize it on our products and services. Google makes reports in accordance with that training.

7. When Google’s product abuse detection system encounters a hash that matches a hash of a known child sexual abuse image, in some cases Google automatically reports the user to NCMEC without re-reviewing the image. In other cases, Google undertakes a manual, human review, to confirm that the image contains apparent child pornography before reporting it to NCMEC.

While Google describes its algorithm as assigning each image in its repository a "digital fingerprint," there is no information provided about the type of hash function Google uses to produce this "digital fingerprint." This is important because file hashing functions work differently than image hashing functions. A file hashing function creates a unique hash value for a file, and changing a single bit of data will change the hash value of the file. File hashing is a method of demonstrating that two files are identical, bit for bit, without comparing each bit to the corresponding bit of the other file, which would be time- and resource-intensive. In contrast, image hashing algorithms provide a way to match images even if they have been altered slightly, but by design they also match files that do not have the same file-hash values.
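
The distinction can be demonstrated concretely. The toy example below assumes SHA-256 for the file hash and implements a deliberately simplified "average hash" for the image hash (real perceptual-hash algorithms are far more sophisticated); it shows how a slight brightness change defeats a file-hash match but not a perceptual match.

```python
import hashlib

def file_hash(data: bytes) -> str:
    """Cryptographic file hash: flipping a single bit changes the digest."""
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels: list[list[int]]) -> int:
    """Toy perceptual hash: one bit per pixel, set when the pixel is
    brighter than the image's mean. Small, uniform edits leave it intact."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (p > mean)
    return bits

original = [[10, 200], [220, 30]]    # a 2x2 grayscale "image"
brightened = [[12, 202], [222, 32]]  # the same image, slightly brightened

print(file_hash(bytes([p for row in original for p in row])) ==
      file_hash(bytes([p for row in brightened for p in row])))  # False
print(average_hash(original) == average_hash(brightened))        # True
```

If Google's "digital fingerprint" behaves like the second function rather than the first, a match does not by itself establish that the reported file is bit-for-bit identical to one a Google employee previously viewed.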

Procedural History

The Defendant filed a motion to suppress the email, its attachments, and all other evidence obtained subsequently. He argued that Google acted as a government agent in this case and that its scanning was therefore an unreasonable warrantless search under the Fourth Amendment. The Defendant also argued that the Detective's search exceeded the scope of Google's private search. The district court disagreed. In denying the motion to suppress, the district court found that Google's search was a private search and that the police did not exceed its scope because there was a "virtual certainty" that a Google employee had previously viewed the images before the police did so. The district court relied upon Google's representation that its algorithm assigns each image in its database a "digital fingerprint" that is "unique." Defendant was subsequently convicted on several counts and appealed to the U.S. Court of Appeals for the Ninth Circuit.

EPIC's Interest

EPIC seeks to ensure that Fourth Amendment protections keep pace with advances in technology. For instance, EPIC filed an amicus brief before the Supreme Court in Carpenter v. United States arguing that the technological changes justified broader Fourth Amendment protections. The Court declined to extend the “third party doctrine” to permit the warrantless collection of cell site location information. Here, EPIC has an interest in ensuring that the Government does not conduct warrantless searches based on proprietary and potentially unreliable algorithmic search techniques.

This case also implicates questions about the standard of proof required to demonstrate the validity of a new investigative technique, an issue EPIC has advised the courts on previously. EPIC advised the Supreme Court about this issue as amicus curiae in Florida v. Harris, arguing that the government should bear the burden of establishing the reliability of investigative techniques in criminal cases.

EPIC has promoted algorithmic transparency for many years. EPIC has also litigated several cases where algorithms used to make decisions that impact individuals were withheld from the public.

EPIC filed an amicus brief in a similar case in the Sixth Circuit, United States v. Miller.

Legal Documents

State Case

California Supreme Court, No. S265795

California Court of Appeals, Fourth Appellate District, Division One, No. D074992

Federal Case

U.S. Court of Appeals for the Ninth Circuit, No. 18-50440

U.S. District Court for the Southern District of California, No. 15-2838
