by Gayatri Gupta

The Fair Housing Act (“The Act”), enacted in 1968, prohibits discrimination in the sale, rental, or financing of housing on the basis of race, color, religion, sex, familial status, or national origin. The Act also prohibits housing-related advertisements that indicate a preference, limitation, or discrimination based on race, color, religion, sex, handicap, familial status, or national origin. Government practices such as the Home Owners' Loan Corporation's policy of considering racial composition in assessing risk, or the Federal Housing Administration's “white-only requirement” in its appraisal standards, helped institutionalize residential segregation on the basis of race.[1] Practices such as steering, blockbusting, redlining, and economic zoning policies also contributed to the creation of urban African American slums. Against this backdrop, the Fair Housing Act was introduced with the aim of providing “for fair housing throughout the United States” and redressing residential segregation created and sustained by private actors and discriminatory government practices.

The Act faces a new challenge from the discriminatory effects of algorithmic and big-data-driven technology. Studies have shown that algorithms deployed in the criminal justice system and the banking industry are biased against people of color. A recent research paper found that Black and Latinx borrowers applying for mortgages online paid 5.3 basis points more in interest when purchasing houses. Platforms such as Facebook have been accused of employing algorithms that allow advertisers to target their ads by race. A study by professors at Northeastern University found that Facebook's delivery of housing advertisements was significantly skewed along racial lines: for example, ads for houses for sale were shown to a higher fraction of white users than users of color. In March 2019, Facebook was also charged by the Department of Housing and Urban Development (“HUD”) with engaging in discriminatory advertising practices and violating the Fair Housing Act.

Such instances suggest that racial bias can easily make its way into algorithms and the data sets on which they are trained. Algorithms inherit human biases during the process of data mining, which can amplify institutional discrimination. With the increasing adoption of algorithms in decision-making, unintentional acts of discrimination are becoming more common and more difficult to detect. Because discriminatory intent is hard to identify in such cases, victims must rely on the “disparate impact” doctrine. Disparate impact occurs when policies and practices that appear neutral nonetheless result in a disproportionately adverse impact on a protected group.[2] In 2015, the Supreme Court in Texas Dept. of Housing and Community Affairs v. Inclusive Communities Project, Inc. ruled that certain sections of the Fair Housing Act include a disparate impact standard of liability. While observing the importance of The Act in making America a more integrated society, the Court also laid down certain “cautionary standards” for applying the disparate impact doctrine.

However, HUD, the agency tasked with enforcing the Fair Housing Act, is considering a new rule that could weaken the disparate impact doctrine. The proposed rule makes it harder for plaintiffs to prove a disparate impact claim by requiring them to meet a new five-pronged standard. The rule also provides three special defences for businesses, banks, insurance companies, and landlords that use algorithmic models with discriminatory effects.

The three suggested defences allow a defendant to escape liability by showing that: (i) the algorithmic model does not use inputs that are substitutes or close proxies for protected classes, and each factor has a valid objective; (ii) the model is an industry standard or is the responsibility of a third party; or (iii) an independent third party verifies that the model is not the cause of a disparate impact. These defences reflect a misunderstanding by HUD of how algorithms operate. The problem of using a single feature, such as ZIP code or economic status, as a proxy for race is well known. Algorithms further complicate the problem by relying on combinations of features that can jointly encode patterns excluding people in marginalized groups. The third-party defence removes any incentive for housing providers, moneylenders, or insurance companies to select algorithms that do not discriminate against people of color. Moreover, it enables such entities to shift the burden onto third-party algorithm creators, who generally rely on the proprietary nature of their technology and trade secret law to resist divulging the design and functioning of their models.
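The point about combinations of features can be made concrete with a small sketch. The data below is entirely synthetic and invented for illustration (it does not come from any real lender, model, or the proposed rule): two features that each reveal nothing about group membership on their own can, taken together, act as a perfect proxy — which is why a defence keyed to whether any single input is a "close proxy" can miss real discrimination.

```python
# Hypothetical illustration: two individually "neutral" features can
# jointly encode a protected attribute even when neither does alone.
# All records here are synthetic and invented for demonstration.

records = [
    # (feature_a, feature_b, protected_group)
    (0, 0, "A"), (1, 1, "A"), (0, 0, "A"), (1, 1, "A"),
    (0, 1, "B"), (1, 0, "B"), (0, 1, "B"), (1, 0, "B"),
]

def share_in_group_a(predicate):
    """Fraction of group-A members among records matching the predicate."""
    matched = [g for a, b, g in records if predicate(a, b)]
    return sum(1 for g in matched if g == "A") / len(matched)

# Each feature alone carries no information about the group...
print(share_in_group_a(lambda a, b: a == 1))  # 0.5
print(share_in_group_a(lambda a, b: b == 1))  # 0.5
# ...but the combination is a perfect proxy for group A.
print(share_in_group_a(lambda a, b: a == b))  # 1.0
```

A model free to combine inputs can thus reconstruct a protected class from features that would each pass an input-by-input proxy screen.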

Thus, even if a plaintiff is able to meet the heightened disparate impact standard, the three defences can absolve a defendant using discriminatory algorithms. The new rule, if adopted, would make it difficult to successfully bring a disparate impact claim against entities using algorithmic models, virtually immunizing subtle forms of racial discrimination in housing.

[1] Richard Rothstein, The Color of Law 64-56 (Liveright Publishing Corporation 2017).

[2] Griggs v. Duke Power Co., 401 U.S. 424 (1971).