both before a product is launched and throughout the product's life
cycle.
(d) New York must establish that the burden of proving that AI products
do not cause harm to New Yorkers will be shouldered by the developers
and deployers of AI. While government and civil society must act to
audit and enforce human rights laws around the use of AI, the companies
employing and profiting from the use of AI must lead in ensuring that
their products are free from algorithmic discrimination.
(e) Close collaboration and communication between New York state and
industry partners are key to ensuring that innovation can occur with
safeguards to protect all New Yorkers. This legislation will ensure that
lines of communication exist and that there is clear statutory authority
to investigate and prosecute entities that break the law.
(f) As new forms of AI are developed beyond what is currently techno-
logically feasible, the legislature intends this section to serve as a
guiding light for future regulations.
(g) Lastly, it is in the interest of all New Yorkers that certain uses
of AI that infringe on fundamental rights, deepen structural inequality,
or result in unequal access to services shall be banned.
§ 3. The civil rights law is amended by adding a new article 8-A to
read as follows:
ARTICLE 8-A
PROTECTIONS REGARDING USE OF ARTIFICIAL INTELLIGENCE
SECTION 85. DEFINITIONS.
86. UNLAWFUL DISCRIMINATORY PRACTICES.
86-A. DEPLOYER AND DEVELOPER OBLIGATIONS.
86-B. WHISTLEBLOWER PROTECTIONS.
87. AUDITS.
88. HIGH-RISK AI SYSTEM REPORTING REQUIREMENTS.
89. RISK MANAGEMENT POLICY AND PROGRAM.
89-A. SOCIAL SCORING AI SYSTEMS PROHIBITED.
89-B. ENFORCEMENT.
§ 85. DEFINITIONS. THE FOLLOWING TERMS SHALL HAVE THE FOLLOWING MEAN-
INGS:
1. "ALGORITHMIC DISCRIMINATION" MEANS ANY CONDITION IN WHICH THE USE
OF AN AI SYSTEM CONTRIBUTES TO UNJUSTIFIED DIFFERENTIAL TREATMENT OR
IMPACTS, DISFAVORING PEOPLE BASED ON THEIR ACTUAL OR PERCEIVED AGE,
RACE, ETHNICITY, CREED, RELIGION, COLOR, NATIONAL ORIGIN, CITIZENSHIP OR
IMMIGRATION STATUS, SEXUAL ORIENTATION, GENDER IDENTITY, GENDER
EXPRESSION, MILITARY STATUS, SEX, DISABILITY, PREDISPOSING GENETIC
CHARACTERISTICS, FAMILIAL STATUS, MARITAL STATUS, PREGNANCY, PREGNANCY
OUTCOMES, HEIGHT, WEIGHT, REPRODUCTIVE HEALTH CARE OR AUTONOMY, STATUS
AS A VICTIM OF DOMESTIC VIOLENCE, OR OTHER CLASSIFICATION PROTECTED
UNDER STATE OR FEDERAL LAWS. ALGORITHMIC DISCRIMINATION SHALL
NOT INCLUDE:
(A) A DEVELOPER'S OR DEPLOYER'S TESTING OF THEIR OWN AI SYSTEM TO
IDENTIFY, MITIGATE, AND PREVENT DISCRIMINATORY BIAS;
(B) EXPANDING AN APPLICANT, CUSTOMER, OR PARTICIPANT POOL TO INCREASE
DIVERSITY OR REDRESS HISTORICAL DISCRIMINATION; OR
(C) AN ACT OR OMISSION BY OR ON BEHALF OF A PRIVATE CLUB OR OTHER
ESTABLISHMENT THAT IS NOT IN FACT OPEN TO THE PUBLIC, AS SET FORTH IN
TITLE II OF THE FEDERAL CIVIL RIGHTS ACT OF 1964, 42 U.S.C. SECTION
2000A(E), AS AMENDED.
2. "ARTIFICIAL INTELLIGENCE SYSTEM" OR "AI SYSTEM" MEANS A MACHINE-
BASED SYSTEM OR COMBINATION OF SYSTEMS, THAT FOR EXPLICIT AND IMPLICIT
OBJECTIVES, INFERS, FROM THE INPUT IT RECEIVES, HOW TO GENERATE OUTPUTS
SUCH AS PREDICTIONS, CONTENT, RECOMMENDATIONS, OR DECISIONS THAT CAN
INFLUENCE PHYSICAL OR VIRTUAL ENVIRONMENTS. ARTIFICIAL INTELLIGENCE
SHALL NOT INCLUDE ANY SOFTWARE USED PRIMARILY FOR BASIC COMPUTERIZED
PROCESSES, SUCH AS ANTI-MALWARE, ANTI-VIRUS, AUTO-CORRECT FUNCTIONS,
CALCULATORS, DATABASES, DATA STORAGE, ELECTRONIC COMMUNICATIONS, FIRE-
WALL, INTERNET DOMAIN REGISTRATION, INTERNET WEBSITE LOADING, NETWORK-
ING, SPAM AND ROBOCALL-FILTERING, SPELLCHECK TOOLS, SPREADSHEETS, WEB
CACHING, WEB HOSTING, OR ANY TOOL THAT RELATES ONLY TO INTERNAL MANAGE-
MENT AFFAIRS SUCH AS ORDERING OFFICE SUPPLIES OR PROCESSING PAYMENTS,
AND THAT DOES NOT MATERIALLY AFFECT THE RIGHTS, LIBERTIES, BENEFITS,
SAFETY OR WELFARE OF ANY INDIVIDUAL WITHIN THE STATE.
3. "AUDITOR" SHALL REFER TO AN INDEPENDENT ENTITY INCLUDING BUT NOT
LIMITED TO AN INDIVIDUAL, NON-PROFIT, FIRM, CORPORATION, PARTNERSHIP,
COOPERATIVE, OR ASSOCIATION COMMISSIONED TO PERFORM AN AUDIT.
4. "CONSEQUENTIAL DECISION" MEANS A DECISION OR JUDGMENT THAT HAS A
MATERIAL, LEGAL OR SIMILARLY SIGNIFICANT EFFECT ON AN INDIVIDUAL'S LIFE
RELATING TO THE IMPACT OF, ACCESS TO, OR THE COST, TERMS, OR AVAILABILI-
TY OF, ANY OF THE FOLLOWING:
(A) EMPLOYMENT, WORKERS' MANAGEMENT, OR SELF-EMPLOYMENT, INCLUDING,
BUT NOT LIMITED TO, ALL OF THE FOLLOWING:
(I) PAY OR PROMOTION;
(II) HIRING OR TERMINATION; AND
(III) AUTOMATED TASK ALLOCATION.
(B) EDUCATION AND VOCATIONAL TRAINING, INCLUDING, BUT NOT LIMITED TO,
ALL OF THE FOLLOWING:
(I) ASSESSMENT OR GRADING, INCLUDING, BUT NOT LIMITED TO, DETECTING
STUDENT CHEATING OR PLAGIARISM;
(II) ACCREDITATION;
(III) CERTIFICATION;
(IV) ADMISSIONS; AND
(V) FINANCIAL AID OR SCHOLARSHIPS.
(C) HOUSING OR LODGING, INCLUDING RENTAL OR SHORT-TERM HOUSING OR
LODGING.
(D) ESSENTIAL UTILITIES, INCLUDING ELECTRICITY, HEAT, WATER, INTERNET
OR TELECOMMUNICATIONS ACCESS, OR TRANSPORTATION.
(E) FAMILY PLANNING, INCLUDING ADOPTION SERVICES OR REPRODUCTIVE
SERVICES, AS WELL AS ASSESSMENTS RELATED TO CHILD PROTECTIVE SERVICES.
(F) HEALTH CARE OR HEALTH INSURANCE, INCLUDING MENTAL HEALTH CARE,
DENTAL, OR VISION.
(G) FINANCIAL SERVICES, INCLUDING A FINANCIAL SERVICE PROVIDED BY A
MORTGAGE COMPANY, MORTGAGE BROKER, OR CREDITOR.
(H) LAW ENFORCEMENT ACTIVITIES, INCLUDING THE ALLOCATION OF LAW
ENFORCEMENT PERSONNEL OR ASSETS, THE ENFORCEMENT OF LAWS, MAINTAINING
PUBLIC ORDER, OR MANAGING PUBLIC SAFETY.
(I) GOVERNMENT SERVICES.
(J) LEGAL SERVICES.
5. "DEPLOYER" MEANS A PERSON, PARTNERSHIP, ASSOCIATION OR CORPORATION
THAT USES AN AI SYSTEM OR COMMERCE IN THE STATE OF NEW YORK OR PROVIDES
AN AI SYSTEM FOR USE BY THE GENERAL PUBLIC IN THE STATE OF NEW YORK. A
DEVELOPER MAY ALSO BE CONSIDERED A DEPLOYER IF ITS ACTIONS SATISFY THIS
DEFINITION.
6. "DEPLOYER-EMPLOYER" MEANS A DEPLOYER THAT IS AN EMPLOYER.
7. "DEVELOPER" MEANS A PERSON, PARTNERSHIP, OR CORPORATION THAT
DESIGNS, CODES, OR PRODUCES AN AI SYSTEM, OR CREATES A SUBSTANTIAL
CHANGE WITH RESPECT TO AN AI SYSTEM, WHETHER FOR ITS OWN USE IN THE
STATE OF NEW YORK OR FOR USE BY A THIRD PARTY IN THE STATE OF NEW YORK.
8. "DEVELOPER-EMPLOYER" MEANS A DEVELOPER THAT IS AN EMPLOYER.
9. "EMPLOYEE" MEANS AN INDIVIDUAL WHO PERFORMS SERVICES FOR AND UNDER
THE CONTROL AND DIRECTION OF AN EMPLOYER FOR WAGES OR OTHER REMUNERA-
TION, INCLUDING FORMER EMPLOYEES, OR NATURAL PERSONS EMPLOYED AS INDE-
PENDENT CONTRACTORS TO CARRY OUT WORK IN FURTHERANCE OF AN EMPLOYER'S
BUSINESS ENTERPRISE WHO ARE NOT THEMSELVES EMPLOYERS.
10. "EMPLOYER" MEANS ANY PERSON, FIRM, PARTNERSHIP, INSTITUTION,
CORPORATION, OR ASSOCIATION THAT EMPLOYS ONE OR MORE EMPLOYEES.
11. "END USER" MEANS ANY INDIVIDUAL OR GROUP OF INDIVIDUALS THAT:
(A) IS THE SUBJECT OF A CONSEQUENTIAL DECISION MADE ENTIRELY BY OR
WITH THE ASSISTANCE OF AN AI SYSTEM; OR
(B) INTERACTS, DIRECTLY OR INDIRECTLY, WITH THE RELEVANT AI SYSTEM ON
BEHALF OF AN INDIVIDUAL OR GROUP THAT IS THE SUBJECT OF A CONSEQUENTIAL
DECISION MADE ENTIRELY BY OR WITH THE ASSISTANCE OF AN AI SYSTEM.
12. "HIGH-RISK AI SYSTEM" MEANS ANY AI SYSTEM THAT, WHEN DEPLOYED:
(A) IS A SUBSTANTIAL FACTOR IN MAKING A CONSEQUENTIAL DECISION; OR (B)
WILL HAVE A MATERIAL IMPACT ON THE STATUTORY OR CONSTITUTIONAL RIGHTS,
CIVIL LIBERTIES, SAFETY, OR WELFARE OF AN INDIVIDUAL IN THE STATE.
13. "SOFTWARE STACK" MEANS THE GROUP OF INDIVIDUAL SOFTWARE COMPONENTS
THAT WORK TOGETHER TO SUPPORT THE EXECUTION OF AN AI SYSTEM.
14. "SUBSTANTIAL CHANGE" MEANS ANY (A) DELIBERATE MODIFICATION TO AN
AI SYSTEM THAT WOULD RESULT IN MATERIAL INACCURACIES IN THE REPORTS
CREATED UNDER SECTION EIGHTY-EIGHT OF THIS ARTICLE; OR (B) UNINTENTIONAL
AND SUBSTANTIAL CHANGE IN THE DATA THAT THE AI SYSTEM USES AS INPUT
DATA.
15. "SUBSTANTIAL FACTOR" MEANS A FACTOR THAT ASSISTS IN MAKING A
CONSEQUENTIAL DECISION OR IS CAPABLE OF ALTERING THE OUTCOME OF A CONSE-
QUENTIAL DECISION. "SUBSTANTIAL FACTOR" INCLUDES, BUT IS NOT LIMITED TO,
ANY USE OF AN AI SYSTEM TO GENERATE ANY CONTENT, DECISION, PREDICTION,
OR RECOMMENDATION THAT IS USED AS A BASIS, IN WHOLE OR IN PART, TO MAKE
A CONSEQUENTIAL DECISION REGARDING AN END USER.
§ 86. UNLAWFUL DISCRIMINATORY PRACTICES. IT SHALL BE AN UNLAWFUL
DISCRIMINATORY PRACTICE:
1. FOR A DEVELOPER OR DEPLOYER TO USE, SELL, OR SHARE A HIGH-RISK AI
SYSTEM OR A PRODUCT FEATURING A HIGH-RISK AI SYSTEM THAT PRODUCES ALGO-
RITHMIC DISCRIMINATION; OR
2. FOR A DEVELOPER TO USE, SELL, OR SHARE A HIGH-RISK AI SYSTEM OR A
PRODUCT FEATURING A HIGH-RISK AI SYSTEM THAT HAS NOT PASSED AN INDEPEND-
ENT AUDIT, IN ACCORDANCE WITH SECTION EIGHTY-SEVEN OF THIS ARTICLE, THAT
HAS FOUND THAT THE PRODUCT DOES NOT IN FACT PRODUCE ALGORITHMIC DISCRIM-
INATION.
§ 86-A. DEPLOYER AND DEVELOPER OBLIGATIONS. 1. (A) ANY DEPLOYER THAT
EMPLOYS A HIGH-RISK AI SYSTEM FOR A CONSEQUENTIAL DECISION MUST, AT
LEAST FIVE BUSINESS DAYS PRIOR TO THE USE OF SUCH SYSTEM TO MAKE A
CONSEQUENTIAL DECISION, INFORM THE END USER, IN CLEAR, CONSPICUOUS, AND
CONSUMER-FRIENDLY TERMS MADE AVAILABLE IN EACH OF THE LANGUAGES IN
WHICH THE COMPANY OFFERS ITS END SERVICES, THAT AI SYSTEMS WILL BE USED
TO MAKE A DECISION OR TO ASSIST IN MAKING A DECISION. THE DEPLOYER MUST
ALLOW SUFFICIENT TIME AND OPPORTUNITY, IN A CLEAR, CONSPICUOUS, AND
CONSUMER-FRIENDLY MANNER, FOR THE CONSUMER TO OPT OUT OF THE AUTOMATED
PROCESS AND FOR THE DECISION TO BE MADE BY A HUMAN REPRESENTATIVE. A
CONSUMER MAY NOT BE PUNISHED OR FACE ANY OTHER ADVERSE ACTION FOR OPTING
OUT OF A DECISION BY AN AI SYSTEM, AND THE DEPLOYER MUST RENDER A DECI-
SION TO THE CONSUMER WITHIN FORTY-FIVE DAYS.
(B) IF A DEPLOYER EMPLOYS A HIGH-RISK AI SYSTEM FOR A CONSEQUENTIAL
DECISION TO DETERMINE WHETHER, OR ON WHAT TERMS, TO CONFER A BENEFIT ON
AN END USER, THE DEPLOYER SHALL OFFER THE END USER THE OPTION TO WAIVE
THEIR RIGHT TO ADVANCE NOTICE OF FIVE BUSINESS DAYS UNDER THIS SUBDIVI-
SION.
(C) IF THE END USER CLEARLY AND AFFIRMATIVELY WAIVES THEIR RIGHT TO
FIVE BUSINESS DAYS' NOTICE, THE DEPLOYER SHALL THEN INFORM THE END
USER, AT LEAST ONE BUSINESS DAY BEFORE THE MAKING OF THE CONSEQUENTIAL
DECISION, IN CLEAR, CONSPICUOUS, AND CONSUMER-FRIENDLY TERMS MADE
AVAILABLE IN EACH OF THE LANGUAGES IN WHICH THE COMPANY OFFERS ITS END
SERVICES, THAT AI SYSTEMS WILL BE USED TO MAKE A DECISION OR TO ASSIST
IN MAKING A DECISION. THE DEPLOYER MUST ALLOW SUFFICIENT TIME AND
OPPORTUNITY, IN A CLEAR, CONSPICUOUS, AND CONSUMER-FRIENDLY MANNER, FOR
THE CONSUMER TO OPT OUT OF THE AUTOMATED PROCESS AND FOR THE DECISION
TO BE MADE BY A HUMAN REPRESENTATIVE. A CONSUMER MAY NOT BE PUNISHED OR
FACE ANY OTHER ADVERSE ACTION FOR OPTING OUT OF A DECISION BY AN AI
SYSTEM, AND THE DEPLOYER MUST RENDER A DECISION TO THE CONSUMER WITHIN
FORTY-FIVE DAYS.
2. ANY DEPLOYER THAT EMPLOYS A HIGH-RISK AI SYSTEM FOR A CONSEQUENTIAL
DECISION MUST INFORM THE END USER, WITHIN FIVE DAYS AND IN A CLEAR,
CONSPICUOUS, AND CONSUMER-FRIENDLY MANNER, IF A CONSEQUENTIAL DECISION
HAS BEEN MADE ENTIRELY BY OR WITH THE ASSISTANCE OF AN AI SYSTEM. THE
DEPLOYER MUST THEN PROVIDE AND EXPLAIN A PROCESS FOR THE END USER TO
APPEAL THE DECISION, WHICH MUST AT MINIMUM ALLOW THE END USER TO (A)
FORMALLY CONTEST THE DECISION, (B) PROVIDE INFORMATION TO SUPPORT THEIR
POSITION, AND (C) OBTAIN MEANINGFUL HUMAN REVIEW OF THE DECISION. A
DEPLOYER MUST RESPOND TO AN END USER'S APPEAL WITHIN FORTY-FIVE DAYS OF
RECEIPT OF THE APPEAL. THAT PERIOD MAY BE EXTENDED ONCE BY FORTY-FIVE
ADDITIONAL DAYS WHERE REASONABLY NECESSARY, TAKING INTO ACCOUNT THE
COMPLEXITY AND NUMBER OF APPEALS. THE DEPLOYER MUST INFORM THE END USER
OF ANY SUCH EXTENSION, TOGETHER WITH THE REASONS FOR THE DELAY, WITHIN
FORTY-FIVE DAYS OF RECEIPT OF THE APPEAL.
3. THE DEPLOYER OR DEVELOPER OF A HIGH-RISK AI SYSTEM IS LEGALLY
RESPONSIBLE FOR THE QUALITY AND ACCURACY OF ALL CONSEQUENTIAL DECISIONS
MADE, INCLUDING ANY BIAS, ALGORITHMIC DISCRIMINATION, OR MISINFORMATION
RESULTING FROM THE OPERATION OF THE AI SYSTEM.
4. THE RIGHTS AND OBLIGATIONS UNDER THIS SECTION MAY NOT BE WAIVED BY
ANY PERSON, PARTNERSHIP, ASSOCIATION OR CORPORATION.
§ 86-B. WHISTLEBLOWER PROTECTIONS. 1. DEVELOPER-EMPLOYERS AND
DEPLOYER-EMPLOYERS OF HIGH-RISK AI SYSTEMS SHALL NOT:
(A) PREVENT AN EMPLOYEE FROM DISCLOSING INFORMATION TO THE ATTORNEY
GENERAL, INCLUDING THROUGH TERMS AND CONDITIONS OF EMPLOYMENT OR SEEKING
TO ENFORCE TERMS AND CONDITIONS OF EMPLOYMENT, IF THE EMPLOYEE HAS
REASONABLE CAUSE TO BELIEVE THE INFORMATION INDICATES A VIOLATION OF
THIS ARTICLE; OR
(B) RETALIATE AGAINST AN EMPLOYEE FOR DISCLOSING INFORMATION TO THE
ATTORNEY GENERAL PURSUANT TO THIS SECTION.
2. AN EMPLOYEE HARMED BY A VIOLATION OF THIS ARTICLE MAY PETITION A
COURT FOR APPROPRIATE RELIEF AS PROVIDED IN SUBDIVISION FIVE OF SECTION
SEVEN HUNDRED FORTY OF THE LABOR LAW.
3. DEVELOPER-EMPLOYERS AND DEPLOYER-EMPLOYERS OF HIGH-RISK AI SYSTEMS
SHALL PROVIDE A CLEAR NOTICE TO ALL EMPLOYEES WORKING ON SUCH AI SYSTEMS
OF THEIR RIGHTS AND RESPONSIBILITIES UNDER THIS ARTICLE, INCLUDING THE
RIGHT OF EMPLOYEES OF CONTRACTORS AND SUBCONTRACTORS TO USE THE DEVELOP-
ER'S INTERNAL PROCESS FOR MAKING PROTECTED DISCLOSURES PURSUANT TO
SUBDIVISION FOUR OF THIS SECTION. A DEVELOPER-EMPLOYER OR DEPLOYER-EM-
PLOYER IS PRESUMED TO BE IN COMPLIANCE WITH THE REQUIREMENTS OF THIS
SUBDIVISION IF THE DEVELOPER-EMPLOYER OR DEPLOYER-EMPLOYER DOES EITHER
OF THE FOLLOWING:
(A) AT ALL TIMES POST AND DISPLAY WITHIN ALL WORKPLACES MAINTAINED BY
THE DEVELOPER-EMPLOYER OR DEPLOYER-EMPLOYER A NOTICE TO ALL EMPLOYEES OF
THEIR RIGHTS AND RESPONSIBILITIES UNDER THIS ARTICLE, ENSURE THAT ALL
NEW EMPLOYEES RECEIVE EQUIVALENT NOTICE, AND ENSURE THAT EMPLOYEES WHO
WORK REMOTELY PERIODICALLY RECEIVE AN EQUIVALENT NOTICE; OR
(B) NO LESS FREQUENTLY THAN ONCE EVERY YEAR, PROVIDE WRITTEN NOTICE
TO ALL EMPLOYEES OF THEIR RIGHTS AND RESPONSIBILITIES UNDER THIS ARTICLE
AND ENSURE THAT THE NOTICE IS RECEIVED AND ACKNOWLEDGED BY ALL OF THOSE
EMPLOYEES.
4. EACH DEVELOPER-EMPLOYER AND DEPLOYER-EMPLOYER SHALL PROVIDE A
REASONABLE INTERNAL PROCESS THROUGH WHICH AN EMPLOYEE MAY ANONYMOUSLY
DISCLOSE INFORMATION TO THE DEVELOPER IF THE EMPLOYEE BELIEVES IN GOOD
FAITH THAT THE INFORMATION INDICATES THAT THE DEVELOPER HAS VIOLATED
ANY PROVISION OF THIS ARTICLE OR ANY OTHER LAW, HAS MADE FALSE OR
MATERIALLY MISLEADING STATEMENTS RELATED TO ITS SAFETY AND SECURITY
PROTOCOL, OR HAS FAILED TO DISCLOSE KNOWN RISKS TO EMPLOYEES. SUCH
PROCESS SHALL INCLUDE, AT A MINIMUM, A MONTHLY UPDATE TO THE PERSON WHO
MADE THE DISCLOSURE REGARDING THE STATUS OF THE DEVELOPER'S INVESTI-
GATION OF THE DISCLOSURE AND THE ACTIONS TAKEN BY THE DEVELOPER IN
RESPONSE TO THE DISCLOSURE.
5. THIS SECTION DOES NOT LIMIT PROTECTIONS PROVIDED TO EMPLOYEES UNDER
SECTION SEVEN HUNDRED FORTY OF THE LABOR LAW.
§ 87. AUDITS. 1. PRIOR TO DEPLOYMENT OF A HIGH-RISK AI SYSTEM, SIX
MONTHS AFTER DEPLOYMENT, AND AT LEAST ONCE EVERY EIGHTEEN MONTHS THERE-
AFTER FOR AS LONG AS THE HIGH-RISK AI SYSTEM REMAINS IN USE, EVERY
DEVELOPER OR DEPLOYER OF A HIGH-RISK AI SYSTEM SHALL CAUSE TO BE
CONDUCTED AT LEAST ONE THIRD-PARTY AUDIT IN COMPLIANCE WITH THE
PROVISIONS OF THIS SECTION TO ENSURE THAT THE PRODUCT DOES NOT PRODUCE
ALGORITHMIC DISCRIMINATION AND COMPLIES WITH THE PROVISIONS OF THIS
ARTICLE. REGARDLESS OF FINAL FINDINGS, THE DEPLOYER OR DEVELOPER SHALL
DELIVER ALL AUDITS CONDUCTED UNDER THIS SECTION TO THE ATTORNEY GENERAL.
2. A DEPLOYER OR DEVELOPER MAY HIRE MORE THAN ONE AUDITOR TO FULFILL
THE REQUIREMENTS OF THIS SECTION.
3. THE AUDIT SHALL INCLUDE THE FOLLOWING:
(A) AN ANALYSIS OF DATA MANAGEMENT POLICIES INCLUDING WHETHER PERSONAL
OR SENSITIVE DATA RELATING TO A CONSUMER IS SUBJECT TO DATA SECURITY
PROTECTION STANDARDS THAT COMPLY WITH THE REQUIREMENTS OF SECTION EIGHT
HUNDRED NINETY-NINE-BB OF THE GENERAL BUSINESS LAW;
(B) AN ANALYSIS OF THE SYSTEM'S ACCURACY AND RELIABILITY ACCORDING TO
EACH SPECIFIED USE CASE LISTED IN THE ENTITY'S REPORTING DOCUMENT FILED
BY THE DEVELOPER OR DEPLOYER UNDER SECTION EIGHTY-EIGHT OF THIS ARTICLE;
(C) AN ANALYSIS OF DISPARATE IMPACTS AND A DETERMINATION OF WHETHER THE
PRODUCT PRODUCES ALGORITHMIC DISCRIMINATION IN VIOLATION OF THIS ARTICLE
FOR EACH INTENDED AND REASONABLY FORESEEABLE USE IDENTIFIED BY THE
DEPLOYER AND DEVELOPER;
(D) AN ANALYSIS OF HOW THE TECHNOLOGY COMPLIES WITH EXISTING RELEVANT
FEDERAL, STATE, AND LOCAL PRIVACY AND DATA PRIVACY LAWS; AND
(E) AN EVALUATION OF THE DEVELOPER'S OR DEPLOYER'S DOCUMENTED RISK
MANAGEMENT POLICY AND PROGRAM REQUIRED UNDER SECTION EIGHTY-NINE OF THIS
ARTICLE FOR CONFORMITY WITH SUBDIVISION ONE OF SUCH SECTION EIGHTY-NINE.
4. THE ATTORNEY GENERAL MAY PROMULGATE FURTHER RULES AS NECESSARY TO
ENSURE THAT AUDITS UNDER THIS SECTION ASSESS WHETHER OR NOT AI SYSTEMS
PRODUCE ALGORITHMIC DISCRIMINATION AND OTHERWISE COMPLY WITH THE
PROVISIONS OF THIS ARTICLE.
5. THE INDEPENDENT AUDITOR SHALL BE PROVIDED WITH COMPLETE AND UNRE-
DACTED COPIES OF ALL REPORTS PREVIOUSLY FILED BY THE DEPLOYER OR DEVEL-
OPER UNDER SECTION EIGHTY-EIGHT OF THIS ARTICLE.
6. AN AUDIT CONDUCTED UNDER THIS SECTION SHALL BE COMPLETED IN ITS
ENTIRETY WITHOUT THE ASSISTANCE OF AN AI SYSTEM.
7. (A) AN AUDITOR SHALL BE AN INDEPENDENT ENTITY INCLUDING BUT NOT
LIMITED TO AN INDIVIDUAL, NON-PROFIT, FIRM, CORPORATION, PARTNERSHIP,
COOPERATIVE, OR ASSOCIATION.
(B) FOR THE PURPOSES OF THIS ARTICLE, NO AUDITOR MAY BE COMMISSIONED
BY A DEVELOPER OR DEPLOYER OF AN AI SYSTEM IF SUCH AUDITOR HAS ALREADY
BEEN COMMISSIONED TO PROVIDE ANY AUDITING OR NON-AUDITING SERVICE,
INCLUDING BUT NOT LIMITED TO FINANCIAL AUDITING, CYBERSECURITY AUDITING,
OR CONSULTING SERVICES OF ANY TYPE, TO THE COMMISSIONING COMPANY IN THE
PAST TWELVE MONTHS.
(C) FEES PAID TO AUDITORS MAY NOT BE CONTINGENT ON THE RESULT OF THE
AUDIT AND THE COMMISSIONING COMPANY SHALL NOT PROVIDE ANY INCENTIVES OR
BONUSES FOR A POSITIVE AUDIT RESULT.
8. THE ATTORNEY GENERAL MAY PROMULGATE FURTHER RULES TO ENSURE (A) THE
INDEPENDENCE OF AUDITORS UNDER THIS SECTION, AND (B) THAT TEAMS CONDUCT-
ING AUDITS INCORPORATE FEEDBACK FROM COMMUNITIES THAT MAY FORESEEABLY BE
THE SUBJECT OF ALGORITHMIC DISCRIMINATION WITH RESPECT TO THE AI SYSTEM
BEING AUDITED.
§ 88. HIGH-RISK AI SYSTEM REPORTING REQUIREMENTS. 1. EVERY DEVELOPER
AND DEPLOYER OF A HIGH-RISK AI SYSTEM SHALL COMPLY WITH THE REPORTING
REQUIREMENTS OF THIS SECTION. REGARDLESS OF FINAL FINDINGS, REPORTS
SHALL BE FILED WITH THE ATTORNEY GENERAL PRIOR TO DEPLOYMENT OF A HIGH-
RISK AI SYSTEM AND THEN ANNUALLY, OR AFTER EACH SUBSTANTIAL CHANGE TO
THE SYSTEM, WHICHEVER COMES FIRST.
2. TOGETHER WITH EACH REPORT REQUIRED TO BE FILED UNDER THIS SECTION,
DEVELOPERS AND DEPLOYERS SHALL FILE WITH THE ATTORNEY GENERAL A COPY OF
THE LAST COMPLETED INDEPENDENT AUDIT REQUIRED BY THIS ARTICLE AND A
LEGAL ATTESTATION THAT THE HIGH-RISK AI SYSTEM: (A) DOES NOT VIOLATE
ANY PROVISION OF THIS ARTICLE; OR (B) MAY VIOLATE OR DOES VIOLATE ONE OR
MORE PROVISIONS OF THIS ARTICLE, IN WHICH CASE THE ATTESTATION SHALL
INCLUDE A STATEMENT THAT THERE IS A PLAN OF REMEDIATION TO BRING THE
HIGH-RISK AI SYSTEM INTO COMPLIANCE WITH THIS ARTICLE AND A SUMMARY OF
SUCH PLAN OF REMEDIATION.
3. DEVELOPERS OF HIGH-RISK AI SYSTEMS SHALL FILE WITH THE ATTORNEY
GENERAL A REPORT CONTAINING THE FOLLOWING:
(A) A DESCRIPTION OF THE SYSTEM INCLUDING:
(I) A DESCRIPTION OF THE SYSTEM'S SOFTWARE STACK;
(II) THE PURPOSE OF THE SYSTEM;
(III) THE SYSTEM'S USE TO END USERS;
(IV) REASONABLY FORESEEABLE USES OUTSIDE OF THE CURRENT OR INTENDED
USES; AND
(V) HOW THE SYSTEM SHOULD BE USED OR NOT USED;
(B) THE INTENDED OUTPUTS OF THE SYSTEM AND WHETHER THE OUTPUTS CAN BE,
OR ARE OTHERWISE APPROPRIATE TO BE, USED FOR ANY PURPOSE NOT PREVIOUSLY
ARTICULATED;
(C) THE METHODS FOR TRAINING OF THEIR MODELS INCLUDING:
(I) ANY PRE-PROCESSING STEPS TAKEN TO PREPARE DATASETS FOR THE TRAIN-
ING OF A MODEL UNDERLYING A HIGH-RISK AI SYSTEM;
(II) DATASHEETS COMPREHENSIVELY DESCRIBING THE DATASETS UPON WHICH
MODELS WERE TRAINED AND EVALUATED, HOW AND WHY DATASETS WERE COLLECTED,
AND HOW THAT TRAINING DATA WILL BE USED AND MAINTAINED GOING FORWARD
THROUGH THE DEVELOPMENT CYCLE; AND
(III) STEPS TAKEN TO ENSURE COMPLIANCE WITH PRIVACY, DATA PRIVACY,
DATA SECURITY, AND COPYRIGHT LAWS;
(D) DETAILED USE AND DATA MANAGEMENT POLICIES;
(E) ANY OTHER INFORMATION NECESSARY TO ALLOW THE DEPLOYER TO UNDER-
STAND THE OUTPUTS AND MONITOR THE SYSTEM FOR COMPLIANCE WITH THIS ARTI-
CLE;
(F) ANY OTHER INFORMATION NECESSARY TO ALLOW THE DEPLOYER TO COMPLY
WITH THE REQUIREMENTS OF SUBDIVISION FOUR OF THIS SECTION; AND
(G) FOR ANY HIGH-RISK AI SYSTEM THAT IS A SUBSTANTIAL FACTOR IN MAKING
A CONSEQUENTIAL DECISION:
(I) A DETAILED DESCRIPTION OF THE PROPOSED USES OF THE SYSTEM, INCLUD-
ING WHAT CONSEQUENTIAL DECISIONS THE SYSTEM WILL SUPPORT;
(II) A DETAILED DESCRIPTION OF THE SYSTEM'S CAPABILITIES AND ANY
DEVELOPER-IMPOSED LIMITATIONS, INCLUDING CAPABILITIES OUTSIDE OF ITS
INTENDED USE, WHEN THE SYSTEM SHOULD NOT BE USED, ANY SAFEGUARDS OR
GUARDRAILS IN PLACE TO PROTECT AGAINST UNINTENDED, INAPPROPRIATE, OR
DISALLOWED USES, AND TESTING OF ANY SUCH SAFEGUARDS OR GUARDRAILS;
(III) AN INTERNAL RISK ASSESSMENT INCLUDING DOCUMENTATION AND RESULTS
OF TESTING CONDUCTED TO IDENTIFY ALL REASONABLY FORESEEABLE RISKS
RELATED TO ALGORITHMIC DISCRIMINATION, ACCURACY AND RELIABILITY, PRIVACY
AND AUTONOMY, AND SAFETY AND SECURITY, AS WELL AS ACTIONS TAKEN TO
ADDRESS THOSE RISKS, AND SUBSEQUENT TESTING TO ASSESS THE EFFICACY OF
ACTIONS TAKEN TO ADDRESS RISKS; AND
(IV) WHETHER THE SYSTEM SHOULD BE MONITORED, AND IF SO, HOW SUCH
SYSTEM SHOULD BE MONITORED.
4. DEPLOYERS OF HIGH-RISK AI SYSTEMS SHALL FILE WITH THE ATTORNEY
GENERAL A REPORT CONTAINING THE FOLLOWING:
(A) A DESCRIPTION OF THE SYSTEM INCLUDING:
(I) A DESCRIPTION OF THE SYSTEM'S SOFTWARE STACK;
(II) THE PURPOSE OF THE SYSTEM;
(III) THE SYSTEM'S USE TO END USERS; AND
(IV) REASONABLY FORESEEABLE USES OUTSIDE OF THE CURRENT OR INTENDED
USES;
(B) THE INTENDED OUTPUTS OF THE SYSTEM AND WHETHER THE OUTPUTS CAN BE,
OR ARE OTHERWISE APPROPRIATE TO BE, USED FOR ANY PURPOSE NOT PREVIOUSLY
ARTICULATED;
(C) AN ASSESSMENT OF THE RELATIVE BENEFITS AND COSTS TO THE CONSUMER
GIVEN THE SYSTEM'S PURPOSE, CAPABILITIES, AND PROBABLE USE CASES;
(D) WHETHER THE DEPLOYER COLLECTS REVENUE OR PLANS TO COLLECT REVENUE
FROM USE OF THE HIGH-RISK AI SYSTEM, AND IF SO, HOW IT MONETIZES OR
PLANS TO MONETIZE USE OF THE SYSTEM; AND
(E) FOR ANY HIGH-RISK AI SYSTEM THAT IS A SUBSTANTIAL FACTOR IN MAKING
A CONSEQUENTIAL DECISION:
(I) A DETAILED DESCRIPTION OF THE PROPOSED USES OF THE SYSTEM, INCLUD-
ING WHAT CONSEQUENTIAL DECISIONS THE SYSTEM WILL SUPPORT;
(II) WHETHER THE SYSTEM IS DESIGNED TO MAKE CONSEQUENTIAL DECISIONS
ITSELF OR WHETHER AND HOW IT SUPPORTS CONSEQUENTIAL DECISIONS;
(III) A DETAILED DESCRIPTION OF THE SYSTEM'S CAPABILITIES AND ANY
DEPLOYER-IMPOSED LIMITATIONS, INCLUDING CAPABILITIES OUTSIDE OF ITS
INTENDED USE, WHEN THE SYSTEM SHOULD NOT BE USED, ANY SAFEGUARDS OR
GUARDRAILS IN PLACE TO PROTECT AGAINST UNINTENDED, INAPPROPRIATE, OR
DISALLOWED USES, AND TESTING OF ANY SUCH SAFEGUARDS OR GUARDRAILS;
(IV) AN ASSESSMENT OF THE RELATIVE BENEFITS AND COSTS TO THE CONSUMER
GIVEN THE SYSTEM'S PURPOSE, CAPABILITIES, AND PROBABLE USE CASES;
(V) AN INTERNAL RISK ASSESSMENT INCLUDING DOCUMENTATION AND RESULTS OF
TESTING CONDUCTED TO IDENTIFY ALL REASONABLY FORESEEABLE RISKS RELATED
TO ALGORITHMIC DISCRIMINATION, ACCURACY AND RELIABILITY, PRIVACY AND
AUTONOMY, AND SAFETY AND SECURITY, AS WELL AS ACTIONS TAKEN TO ADDRESS
THOSE RISKS, AND SUBSEQUENT TESTING TO ASSESS THE EFFICACY OF ACTIONS
TAKEN TO ADDRESS RISKS; AND
(VI) WHETHER THE SYSTEM SHOULD BE MONITORED, AND IF SO, HOW SUCH
SYSTEM SHOULD BE MONITORED.
5. THE ATTORNEY GENERAL SHALL:
(A) PROMULGATE RULES FOR A PROCESS WHEREBY DEVELOPERS AND DEPLOYERS
MAY REQUEST REDACTION OF PORTIONS OF REPORTS REQUIRED UNDER THIS SECTION
TO ENSURE THAT THEY ARE NOT REQUIRED TO DISCLOSE SENSITIVE AND PROTECTED
INFORMATION; AND
(B) MAINTAIN AN ONLINE DATABASE, ACCESSIBLE TO THE GENERAL PUBLIC AND
UPDATED BIANNUALLY, CONTAINING THE REPORTS, REDACTED IN ACCORDANCE WITH
THIS SUBDIVISION, AND THE AUDITS REQUIRED BY THIS ARTICLE.
6. FOR HIGH-RISK AI SYSTEMS ALREADY IN DEPLOYMENT ON THE EFFECTIVE
DATE OF THIS ARTICLE, DEVELOPERS AND DEPLOYERS SHALL HAVE EIGHTEEN
MONTHS FROM SUCH EFFECTIVE DATE TO COMPLETE AND FILE THE REPORTS AND
INDEPENDENT AUDIT REQUIRED BY THIS ARTICLE.
§ 89. RISK MANAGEMENT POLICY AND PROGRAM. 1. EACH DEVELOPER OR DEPLOY-
ER OF HIGH-RISK AI SYSTEMS SHALL PLAN, DOCUMENT, AND IMPLEMENT A RISK
MANAGEMENT POLICY AND PROGRAM TO GOVERN DEVELOPMENT OR DEPLOYMENT, AS
APPLICABLE, OF SUCH HIGH-RISK AI SYSTEM. THE RISK MANAGEMENT POLICY AND
PROGRAM SHALL SPECIFY AND INCORPORATE THE PRINCIPLES, PROCESSES, AND
PERSONNEL THAT THE DEVELOPER OR DEPLOYER USES TO IDENTIFY, DOCUMENT, AND
MITIGATE KNOWN OR REASONABLY FORESEEABLE RISKS OF ALGORITHMIC DISCRIM-
INATION COVERED UNDER SUBDIVISION ONE OF SECTION EIGHTY-SIX OF THIS
ARTICLE. THE RISK MANAGEMENT POLICY AND PROGRAM SHALL BE AN ITERATIVE
PROCESS, PLANNED AND IMPLEMENTED OVER THE LIFE CYCLE OF A HIGH-RISK AI
SYSTEM AND SUBJECT TO REGULAR, SYSTEMATIC REVIEW AND UPDATES, INCLUDING
UPDATES TO DOCUMENTATION. A RISK MANAGEMENT POLICY AND PROGRAM IMPLE-
MENTED AND MAINTAINED PURSUANT TO THIS SECTION SHALL BE REASONABLE
CONSIDERING:
(A) THE GUIDANCE AND STANDARDS SET FORTH IN VERSION 1.0 OF THE "ARTI-
FICIAL INTELLIGENCE RISK MANAGEMENT FRAMEWORK" PUBLISHED BY THE NATIONAL
INSTITUTE OF STANDARDS AND TECHNOLOGY IN THE UNITED STATES DEPARTMENT OF
COMMERCE, OR THE LATEST VERSION OF SUCH FRAMEWORK IF, IN THE ATTORNEY
GENERAL'S DISCRETION, THE LATEST VERSION IS AT LEAST AS STRINGENT AS
VERSION 1.0;
(B) THE SIZE AND COMPLEXITY OF THE DEVELOPER OR DEPLOYER;
(C) THE NATURE, SCOPE, AND INTENDED USES OF THE HIGH-RISK AI SYSTEM
DEVELOPED OR DEPLOYED; AND
(D) THE SENSITIVITY AND VOLUME OF DATA PROCESSED IN CONNECTION WITH
THE HIGH-RISK AI SYSTEM.
2. A RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED PURSUANT TO SUBDI-
VISION ONE OF THIS SECTION MAY COVER MULTIPLE HIGH-RISK AI SYSTEMS
DEVELOPED BY THE SAME DEVELOPER OR DEPLOYED BY THE SAME DEPLOYER IF IT
IS SUFFICIENT TO ADDRESS EACH SUCH SYSTEM.
3. THE ATTORNEY GENERAL MAY REQUIRE A DEVELOPER OR A DEPLOYER TO
DISCLOSE THE RISK MANAGEMENT POLICY AND PROGRAM IMPLEMENTED PURSUANT TO
SUBDIVISION ONE OF THIS SECTION IN A FORM AND MANNER PRESCRIBED BY THE
ATTORNEY GENERAL. THE ATTORNEY GENERAL MAY EVALUATE THE RISK MANAGEMENT
POLICY AND PROGRAM TO ENSURE COMPLIANCE WITH THIS SECTION.
§ 89-A. SOCIAL SCORING AI SYSTEMS PROHIBITED. NO PERSON, PARTNERSHIP,
ASSOCIATION OR CORPORATION SHALL DEVELOP, DEPLOY, USE, OR SELL AN AI
SYSTEM WHICH EVALUATES OR CLASSIFIES THE TRUSTWORTHINESS OF NATURAL
PERSONS OVER A CERTAIN PERIOD OF TIME BASED ON THEIR SOCIAL BEHAVIOR OR
KNOWN OR PREDICTED PERSONAL OR PERSONALITY CHARACTERISTICS, WITH THE
SOCIAL SCORE LEADING TO EITHER OR BOTH OF THE FOLLOWING:
1. DIFFERENTIAL TREATMENT OF CERTAIN NATURAL PERSONS OR WHOLE GROUPS
THEREOF IN SOCIAL CONTEXTS WHICH ARE UNRELATED TO THE CONTEXTS IN WHICH
THE DATA WAS ORIGINALLY GENERATED OR COLLECTED; OR
2. DIFFERENTIAL TREATMENT OF CERTAIN NATURAL PERSONS OR WHOLE GROUPS
THEREOF THAT IS UNJUSTIFIED OR DISPROPORTIONATE TO THEIR SOCIAL BEHAVIOR
OR ITS GRAVITY.
§ 89-B. ENFORCEMENT. 1. WHENEVER THERE SHALL BE A VIOLATION OF ANY
PROVISION OF THIS ARTICLE, AN APPLICATION MAY BE MADE BY THE ATTORNEY
GENERAL IN THE NAME OF THE PEOPLE OF THE STATE OF NEW YORK, TO THE
SUPREME COURT HAVING JURISDICTION BY A SPECIAL PROCEEDING TO ISSUE AN
INJUNCTION, AND UPON NOTICE TO THE RESPONDENT OF NOT LESS THAN TEN DAYS,
TO ENJOIN AND RESTRAIN THE CONTINUANCE OF SUCH VIOLATION; AND IF IT
SHALL APPEAR TO THE SATISFACTION OF THE COURT THAT THE RESPONDENT HAS,
IN FACT, VIOLATED THIS ARTICLE, AN INJUNCTION MAY BE ISSUED BY THE
COURT, ENJOINING AND RESTRAINING ANY FURTHER VIOLATIONS, WITHOUT REQUIR-
ING PROOF THAT ANY PERSON HAS, IN FACT, BEEN INJURED OR DAMAGED THEREBY.
IN ANY SUCH PROCEEDING, THE COURT MAY MAKE ALLOWANCES TO THE ATTORNEY
GENERAL AS PROVIDED IN PARAGRAPH SIX OF SUBDIVISION (A) OF SECTION
EIGHTY-THREE HUNDRED THREE OF THE CIVIL PRACTICE LAW AND RULES, AND
DIRECT RESTITUTION. WHENEVER THE COURT SHALL DETERMINE THAT A VIOLATION
OF THIS ARTICLE HAS OCCURRED, THE COURT MAY IMPOSE A CIVIL PENALTY OF
NOT MORE THAN TWENTY THOUSAND DOLLARS FOR EACH VIOLATION.
2. THERE SHALL BE A PRIVATE RIGHT OF ACTION BY PLENARY PROCEEDING FOR
ANY PERSON HARMED BY ANY VIOLATION OF THIS ARTICLE BY ANY NATURAL PERSON
OR ENTITY. THE COURT SHALL AWARD COMPENSATORY DAMAGES AND LEGAL FEES TO
THE PREVAILING PARTY.
3. IN EVALUATING ANY MOTION TO DISMISS A PLENARY PROCEEDING COMMENCED
PURSUANT TO SUBDIVISION TWO OF THIS SECTION, THE COURT SHALL PRESUME THE
SPECIFIED AI SYSTEM WAS CREATED AND/OR OPERATED IN VIOLATION OF A SPECI-
FIED LAW OR LAWS AND THAT SUCH VIOLATION CAUSED THE HARM OR HARMS
ALLEGED.
(A) A DEFENDANT CAN REBUT PRESUMPTIONS MADE PURSUANT TO THIS SUBDIVI-
SION THROUGH CLEAR AND CONVINCING EVIDENCE THAT THE SPECIFIED AI SYSTEM
DID NOT CAUSE THE HARM OR HARMS ALLEGED AND/OR DID NOT VIOLATE THE
ALLEGED LAW OR LAWS. AN ALGORITHMIC AUDIT CAN BE CONSIDERED AS EVIDENCE
IN REBUTTING SUCH PRESUMPTIONS, BUT THE MERE EXISTENCE OF SUCH AN AUDIT,
WITHOUT ADDITIONAL EVIDENCE, SHALL NOT BE CONSIDERED CLEAR AND CONVINC-
ING EVIDENCE.
(B) WHERE SUCH PRESUMPTIONS ARE NOT REBUTTED PURSUANT TO THIS SUBDIVI-
SION, THE ACTION SHALL NOT BE DISMISSED.
(C) WHERE SUCH PRESUMPTIONS ARE REBUTTED PURSUANT TO THIS SUBDIVISION,
A MOTION TO DISMISS AN ACTION SHALL BE ADJUDICATED WITHOUT ANY CONSIDER-
ATION OF THIS SECTION.
4. THE SUPREME COURT IN THE STATE SHALL HAVE JURISDICTION OVER ANY
ACTION, CLAIM, OR LAWSUIT TO ENFORCE THE PROVISIONS OF THIS ARTICLE.
§ 4. Section 296 of the executive law is amended by adding a new
subdivision 23 to read as follows:
23. IT SHALL BE AN UNLAWFUL DISCRIMINATORY PRACTICE UNDER THIS SECTION
FOR A DEPLOYER OR A DEVELOPER, AS SUCH TERMS ARE DEFINED IN SECTION
EIGHTY-FIVE OF THE CIVIL RIGHTS LAW, TO ENGAGE IN AN UNLAWFUL DISCRIMI-
NATORY PRACTICE UNDER SECTION EIGHTY-SIX OF THE CIVIL RIGHTS LAW.
§ 5. This act shall take effect immediately.