We are pleased to announce that the fourth HUMANIZE workshop on Transparency and Explainability in Adaptive Systems through User Modeling Grounded in Psychological Theory will be held in conjunction with the 25th ACM International Conference on Intelligent User Interfaces (IUI) 2020 in Cagliari, Italy.

More information will soon follow. 

Welcome!

Welcome to the community website for the annual HUMANIZE workshop. It is held in conjunction with the ACM International Conference on Intelligent User Interfaces (IUI).

What is HUMANIZE about?

More and more systems are designed to be intelligent: by relying on data and the application of machine learning, these systems adapt themselves to match predicted or inferred user needs and preferences.
Observable, measurable, objective interaction behavior plays a central role in the design of these systems, both in the predictive modeling that provides the intelligence (e.g., predicting which web pages a visitor will open based on their past navigation behavior) and in the evaluation (e.g., deciding whether a system performs well based on the extent to which its predictions are accurate and acted upon).

When designing more conventional systems (following approaches such as user-centered design or design thinking), designers rely on latent user characteristics (such as beliefs and attitudes, proficiency levels, expertise, and personality) in addition to objective, observable behavior. By relying on qualitative studies (e.g., observations, focus groups, interviews), they consider not only user characteristics or behavior in isolation, but also the relationships among them. This combination provides valuable information on how to design such systems.

HUMANIZE aims to investigate the potential of combining quantitative, data-driven approaches with qualitative, theory-driven approaches. We solicit work from researchers who incorporate variables grounded in psychological theory into their adaptive/intelligent systems. These variables support designing adaptive systems from a more user-centered perspective, deriving requirements and needs from user characteristics rather than solely from interaction behavior, which allows for:

  • Explainability: An adaptive system that incorporates theory-grounded variables, rather than relying solely on interaction behavior data, can be explained in terms of expectations, perceptions, and variables and models drawn from theory, which describe users as thinking, feeling entities undertaking purposeful actions (and reactions) in, e.g., learning, reasoning, problem solving, and decision making.
  • Fairness: An adaptive system that places a human-centred model at its core can consider and respect individual differences, enabling the design and creation of environments, interventions, and AI algorithms that are ethical, open to diversity, responsive to policy and legal challenges, and fair to all users with regard to their skills and unique characteristics.
  • Transparency: An adaptive system that exploits the full potential of its human-centred model, both in how the model is defined and in how it shapes the decisions made by AI algorithms, can make subsequent actions visible and transparent, returning control to users so they can regulate, monitor, and understand adaptive outcomes that directly affect them.
  • Bias: When an adaptive system's AI algorithms and adaptive processes are designed and developed with the characteristics of a human-centred model, and the impact and relationships of its variables, in mind, they can support informed interpretation and unveil possibly biased decisions, actions, and operations during users' multi-purpose interactions.

HUMANIZE provides scholars and practitioners in the field of personalized user interfaces with a venue to discuss and explore the potential of incorporating variables derived from psychological theory into adaptive systems, with the goals of increasing explainability, fairness, and transparency and decreasing bias during interactions.