• Conference Schedule

    November 12, 2020 - All times in EST

  • Imagine a future where you don’t have to change your voice to interact with Siri. It’s possible, but we have to start considering how language perpetuates oppression and expand our view of the “right voice” for our technology. In Why Does Siri Sound White? we’ll explore how coded language, both verbally and programmatically, reproduces and reinforces the biases that show up in current and emerging voice technologies.

  • Artificial Intelligence is developing at a breakneck pace, with each week bringing new innovations. For UX designers, the advancement of this technology will drastically reshape our careers, changing the contours of what it means to be a UX designer. It will change the nature of what we make and how we make it. If we are not ready, we run the risk of being left behind. How can we be prepared? What should we be looking out for?

  • Ethics discussions abound, but translating “do no harm” into our work is frustrating at best, and obfuscatory at worst. We can agree that keeping humans safe and in control is important, but implementing ethics is intimidating work.
    Learn how to wield your preferred technology ethics code to make an AI system that is accountable, de-risked, respectful, secure, honest and usable. The presenter will introduce the topic of ethics and then step through a user experience (UX) framework to guide AI development teams successfully through this process.
    This talk is for UX leaders and development teams working on (or anticipating working on) AI systems. Attendees do not need any previous experience with or knowledge of ethics, but will ideally have some awareness of AI systems. Attendees who are generally interested in ethics will also find this session interesting.
    Audience takeaways:
    - Gain more awareness of ethics and the impacts of AI systems
    - Leave with an actionable checklist you can begin using immediately to implement trustworthy AI systems
    - Become an evangelist for ethics in your organization

  • Conversation is our primary mode of interacting with one another, and increasingly, with computers. When we enter into the duet of conversation, we accept responsibility for keeping the interaction on track and expect the same of our partner, whether they’re human or automated. Today, our automated conversation partners often fail to meet our expectations by hearing what we say, but not understanding what we mean.
    We can create more satisfying and intuitive conversations by designing them as fundamentally shared experiences that partners compose together. Conversations are forged on a set of cooperative principles for speaking and listening that allow conversation to achieve its communicative and emotional purposes. When we design interactions that conform to these shared expectations, we enable users to respond naturally and comfortably.
    Alexa, Siri, and Google Assistant are part of everyday conversations in one-third of U.S. households today and are projected to reach three-quarters of households within the next three years. Designers have the opportunity to forge a genuine connection with users by creating coherent and trustworthy partners who cooperate in the duet of conversation.
    Audience Takeaways
    - An understanding of the cooperative principle and why it should be the driving force behind conversation design
    - The limitations of conversational technologies and how to design graceful interactions in spite of them
    - The importance of setting and context for designing conversations
    - The role of user expectations for conversational interactions and the danger of designing interactions that fail to meet them

  • All around the world, companies are deploying digital products that intentionally and systematically seek to change their users’ behavior, and they are doing so at massive scale. Some of these efforts are laudable, but others are far less so.
    Behind many of these efforts lies behavioral science, an interdisciplinary study of human behavior that combines economics and psychology to better understand why people make seemingly irrational choices and take seemingly irrational actions. It uses that understanding to help change behavior, whether in people’s daily lives or in how they use a product or service.
    In this talk, you’ll get a practical introduction to how companies are applying behavioral science at scale, and how you can do so in your own work. You’ll learn about the ethical challenges they are confronting, and ways to ensure that these techniques are used appropriately. You’ll also discover how behavioral science and experimental methods complement big data approaches, and how they are increasingly intertwined to help understand, and shape, behavior.
    Key takeaways:
    - How to apply behavioral science in UX and product development, and how other practitioners are already doing so
    - How to combine the wealth of techniques available to you, from behavioral science, to machine learning, to in-depth qualitative user research, to gain a more holistic picture of your users
    - How we can apply behavioral science to ourselves – especially to address algorithmic and design biases, and follow an ethical behavior change path

  • Designing user experiences for machine learning (ML) and other emerging technologies can feel ephemeral; users often see ML as a black box offering little visibility or control. As with other forms of automation, users need to trust that ML features will behave as expected, and that they can override them if needed.
    Last year, PagerDuty launched Intelligent Alert Grouping, a feature that predicts how to group alerts in real time for software developers so they can understand the types of problems happening on their systems. While Intelligent Alert Grouping is more efficient and flexible than alternatives like hard-coded rules, users hesitated to try the new technology because they feared it might make mistakes.
    In this talk, I’ll share my experience designing interactions that helped users preview and experiment with this new ML feature, how our team approached user research and design, and the lessons we learned about how users gain trust in unfamiliar technology.
    Audience takeaways:
    - Ideas for how to apply tactics like journey mapping and prototyping in new ways that address product experiences that change over time
    - Examples of ways to provide more transparency and control for experiences with ML products that can often be invisible
    - Strategies for explainability, both inside and outside the product, through effective metaphors and collaboration with user-facing teams
    If you’re curious about how to tackle the unique UX challenges of these new technologies, or interested in the unexpected ways users want to experiment with them, this talk is for you.