Pre-Workshop Videos & Activities

This workshop includes only a brief amount of direct instruction so that most of the scheduled time can be devoted to interactive practice with a variety of Generative AI (GenAI) tools. Participants will learn to use key features of these GenAI tools for research purposes and, more broadly, for general productivity. The GenAI tools included in the active learning portion of the workshop are:

To be ready to jump right into the interactive activities during the workshop, please watch these short videos beforehand:

Tips for Creating Prompts (based on Coursera’s Google AI Essentials Course)

This tool is an example of a Large Language Model (LLM): an AI model trained on large amounts of text to identify patterns among words, concepts, and phrases so that it can generate responses to user prompts. Because the quality of an LLM's output depends on the quality of the user's input, a well-crafted prompt follows a five-part framework: Task, Context, References, Evaluate, and Iterate.
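
Before the framework steps below, here is a minimal sketch of what sending a single prompt to an LLM looks like from code. It assumes the OpenAI Python client (`pip install openai`) and an API key in the `OPENAI_API_KEY` environment variable; the model name is illustrative, and the GenAI tools used in the workshop may be accessed through a web interface instead.

```python
# Minimal sketch: send one prompt to an LLM and print the response.
# Assumes the OpenAI Python client and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize the key steps of writing a literature review."}
    ],
)

print(response.choices[0].message.content)
```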

  1. Specify the Task: Clearly define what you want the AI to do, including:
       • Task: the specific action (e.g., writing an email, creating an image, summarizing a document).
       • Persona (if applicable): the expertise or perspective the AI should adopt.
       • Format (if applicable): how you want the output structured (e.g., a bulleted list).

  2. Provide Necessary Context: Include details that help refine the AI’s focus, such as:
       • Objectives for the task.
       • Rules or guidelines to follow.
       • Background information relevant to the request.

  3. Include References (if applicable): Provide examples or resources that illustrate the desired style, tone, and format. Use 2-5 high-quality examples.

  4. Evaluate Your Output: Assess the quality of the AI-generated content for accuracy, bias, relevance, and consistency. Recognize that AI-generated content is a starting point, not a final product.

  5. Iterate for Better Results: Refine your prompt based on the AI’s output. This iterative process involves providing an initial prompt, evaluating the response, and adjusting the prompt based on how well the response met your needs. Effective prompting is about continuously improving your approach until you achieve the desired output (see the sketch following this list).

AI can be a useful tool for increasing efficiency in your research. However, it is important to be aware of the limitations and risks associated with using AI technology, described in the next section (from Coursera’s Google AI Essentials Course).
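
To show how the five steps fit together, the sketch below assembles a prompt from Task, Persona, Format, Context, and References components and then loops through evaluate-and-iterate refinements. It is a minimal illustration only: the `generate()` and `looks_acceptable()` functions are placeholders standing in for a real GenAI tool and for your own human review of accuracy, bias, relevance, and consistency.

```python
# Minimal sketch of the Task / Context / References / Evaluate / Iterate framework.
# The model call is a placeholder; swap in whichever GenAI tool you are using.

def build_prompt(task, persona=None, output_format=None, context=None, references=None):
    """Assemble a prompt from the framework components (steps 1-3)."""
    parts = []
    if persona:
        parts.append(f"Act as {persona}.")
    parts.append(f"Task: {task}")
    if output_format:
        parts.append(f"Format the output as {output_format}.")
    if context:
        parts.append(f"Context: {context}")
    if references:
        parts.append("Examples of the desired style and format:")
        parts.extend(f"- {example}" for example in references)
    return "\n".join(parts)


def generate(prompt):
    """Placeholder for a call to a GenAI tool; returns a canned response here."""
    return f"[model response to: {prompt[:40]}...]"


def looks_acceptable(response):
    """Step 4 (Evaluate): in practice, review accuracy, bias, relevance, and
    consistency yourself; this stub only checks that something came back."""
    return len(response) > 20


prompt = build_prompt(
    task="summarize the attached grant proposal in plain language",
    persona="a research librarian",
    output_format="a bulleted list of five key points",
    context="The summary is for undergraduate research assistants.",
    references=["Example summary: ..."],
)

# Step 5 (Iterate): refine the prompt until the output is acceptable,
# capping the number of attempts.
for attempt in range(3):
    response = generate(prompt)
    if looks_acceptable(response):
        break
    prompt += "\nPlease make the summary shorter and more specific."

print(response)
```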

Harms, Biases, and Risks Associated with AI (from Coursera’s Google AI Essentials Course)

AI can produce biased outputs because of biases in its training data, which can lead to several kinds of harm:

  • Allocative Harm: Withholding opportunities or resources, such as an AI denying a tenant application based on inaccurate risk assessments.

  • Quality-of-Service Harm: Reduced performance for certain groups, exemplified by speech-recognition tech struggling with diverse speech patterns.

  • Representational Harm: Reinforcement of stereotypes, like translation tools skewing gender in outputs based on input words.

  • Social System Harm: Macro-level inequalities exacerbated by AI, such as harmful deepfakes.

  • Interpersonal Harm: Technology misuse affecting personal relationships and individual agency, like pranks facilitated by smart devices.

  • Drift and Knowledge Cutoff in AI: Drift refers to declining accuracy in AI predictions over time due to outdated training data, while knowledge cutoff is the most recent point in time covered by the model’s training data. For instance, a model trained on 2015 data may not reflect current trends. Regular monitoring and updates are crucial to maintaining AI reliability, as biases in new data and changes in societal behavior can contribute to drift (a simple monitoring sketch follows this list).
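
As a simple illustration of what “regular monitoring” for drift might look like, the sketch below tracks prediction accuracy by time period from a hypothetical log and flags periods where accuracy falls below a chosen threshold. The log, threshold, and month labels are all assumptions made for the example.

```python
# Sketch: detect possible drift by tracking accuracy per time period.
from collections import defaultdict

# Hypothetical log of (month, prediction_was_correct) pairs.
prediction_log = [
    ("2024-01", True), ("2024-01", True), ("2024-01", True),
    ("2024-06", True), ("2024-06", False), ("2024-06", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for month, was_correct in prediction_log:
    total[month] += 1
    correct[month] += was_correct  # True counts as 1, False as 0

for month in sorted(total):
    accuracy = correct[month] / total[month]
    print(f"{month}: accuracy {accuracy:.0%}")
    if accuracy < 0.5:  # threshold chosen arbitrarily for this illustration
        print(f"  possible drift in {month}: consider updating or retraining the model")
```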

NEXT STEP: Introduction to Hands-On Activities