
Join PAI at NeurIPS 2019


Held on December 8-14, 2019 in Vancouver, Canada, the Neural Information Processing Systems (NeurIPS) Conference is the largest international AI gathering in the world. This year, PAI is involved in a number of workshops and events aligned with our mission of responsible AI, with a focus on promoting diversity and inclusion.

Ensuring the participation of researchers from around the world remains a challenge for international conferences, where the denial of entry visas is a persistent problem. At this year’s NeurIPS, PAI is sponsoring workshops whose participants faced challenges with travel and entry visas: the Black in AI Workshop (December 9) and Machine Learning for the Developing World (December 13). This support is in line with PAI’s approach to visa accessibility and reflects our belief that bringing together experts from different cultures, backgrounds, and perspectives is essential for AI/ML to flourish and help create the future we desire. Diversity within our own talent recruiting pipeline is also a priority for PAI. We will be sharing open positions, including our newest Diversity & Inclusion Fellowship, at workshops and events.

From new research presentations to workshop participation to hosted events, we invite NeurIPS attendees to join us in the activities detailed below. Come find us and pick up a new PAI sticker.

Monday, December 9

Black in AI Workshop
Black in AI is a place for sharing ideas, fostering collaborations and discussing initiatives to increase the presence of Black people in the field of Artificial Intelligence. PAI staff and colleagues will be supporting the Black in AI Workshop during the day and will host a booth to share our latest career opportunities. Come visit our booth to learn about the Partnership and discover our open roles.
8 a.m. – 5 p.m. East Building, Rooms 1, 2, and 3.

Wednesday, December 11

Deepfake Detection Challenge (Invite-Only Kickoff Event)
Help address media manipulation and mark the beginning of the PAI AI and Media Integrity Steering Committee’s first project, the Deepfake Detection Challenge. Hear from PAI program lead Claire Leibowicz, representatives from Facebook, and other PAI Partners in a panel discussion on the steering committee’s oversight and governance of the challenge, and socialize with other attendees involved in the work. PAI Partners may email Katherine Lewis to request an invitation to this exclusive event.
4 – 7 p.m. Offsite Venue.

PAI Social – Find your Allies
How to be an individual champion of ethical AI practices at your company and find the collaborators you need
Share ideas, commiserate over mutual challenges, and meet future collaborators in pursuit of AI technology for the benefit of people and society. Hosted by PAI and the MacArthur Foundation, this NeurIPS social event will be a hybrid cocktail/mocktail mixer where attendees can discuss strategies for scoping, generating buy-in for, and executing AI ethics projects within the NeurIPS community. RSVP via Eventbrite.
7 – 10 p.m. West Level 2, Rooms 202-204.

Friday, December 13

Human-Centric Machine Learning Workshop
PAI will be presenting three posters and a talk at this HCML workshop, which brings together experts from diverse backgrounds to identify multidisciplinary approaches and best practices that maximize the societal benefits of machine learning while minimizing its risks.
8:30 a.m. – 5:15 p.m. West Level 2, Rooms 223-224

  • PAI research scientist, Alice Xiang, will present a poster and talk on the Legal Compatibility of Fairness Definitions, a paper co-authored with PAI research fellow, Deborah Raji, which addresses the tensions between legal and machine learning (ML) definitions of “fairness,” with a focus on commonly referenced terms related to U.S. anti-discrimination law.
  • PAI research fellow, Umang Bhatt, along with PAI research scientist, Alice Xiang, will present a poster on Explainable Machine Learning in Deployment. The study explores how organizations view and apply explainability, revealing the limitations of current explainability techniques that hamper their use for end-users, and introducing a framework for establishing clear goals for explainability.
  • PAI program lead, Jingying Yang, along with PAI research fellow, Deborah Raji, will present a poster on documentation as a promising intervention to operationalize the AI ethics principle of transparency, with ABOUT ML, PAI’s ongoing multi-stakeholder project on best practices for machine learning documentation, as an example of one large scale effort in this space.

Machine Learning for the Developing World (ML4D) Workshop
ML4D highlights the specific challenges and risks of deploying machine learning responsibly and ethically in the developing world. PAI research scientist, Alice Xiang, will describe PAI’s fairness, transparency, and accountability (FTA) research, highlighting how this work can inform understanding and implementation in developing regions.
Presentation Time: 4:35 – 4:40 p.m. West Level 1, Rooms 121-122

Safety & Robustness in Decision Making Workshop
This all-day event focuses on the evolving need for decision-making systems and algorithms that can enable safe interaction and good performance in high-stakes environments. PAI research scientist, Carroll Wainwright, and director of research, Peter Eckersley, will present SafeLife, a project designed to test the safety of reinforcement learning agents. In this publicly available reinforcement learning environment, agents are trained to maximize reward while operating safely, and are graded on how well they avoid unnecessary side effects. Avoiding negative side effects is a complex challenge, and this project provides a baseline against which future safety-related research can be measured.
Poster Sessions: 10 – 11 a.m. and 3:20 – 4:30 p.m. East Ballroom A

Saturday, December 14

AI for Social Good Workshop
This workshop addresses the challenge of bridging theory and practice for AI ethics and “good” intelligent systems. PAI program lead, Jingying Yang, will discuss how our ABOUT ML initiative intends to bridge AI principles and practice. Because creating documentation at scale requires changes to organizational processes and workflows, as well as resources and buy-in, ABOUT ML’s recommendations can lay a foundation of organizational infrastructure for implementing AI ethics principles, helping to close the gaps between principles and practice and between research and industry. ABOUT ML’s eventual goal is to set new industry norms for documenting machine learning systems.
Presentation Time: 2:20 – 2:40 p.m. Location: TBD


Throughout the conference, PAI staff will be attending sessions and meeting with members of our Partner community.

Be sure to follow us as we share notable updates on Twitter at @PartnershipAI.