    Seminar Natural Language Processing

    This seminar covers recent topics in Natural Language Processing. This term's seminar (summer semester 2023) is open to BSc and MSc students and focuses on Prompting and Human Feedback in Modern Natural Language Processing.

    Description

    In this seminar, you will:

    • Select one of the offered subtopics on prompting and human feedback in NLP (see Subtopics).
    • Read, understand, and explore the scientific literature (start from the offered papers and expand to related work).
    • Organize the collected knowledge into a meaningful presentation on your topic (15 minutes + 5 minutes Q&A).
    • Summarize your topic in a concise report (6-8 pages + references).

    Subtopics

    Each student explores and presents one of the subtopics below. For each subtopic, one or more seminal papers are provided as a starting point for the literature exploration. Your presentation and report are based on the provided papers and additional related work. Your goal is to clearly explain the subtopic, motivate why it is relevant, and briefly discuss recent work and applications.

    Prompt-Tuning

    • Lester, B., Al-Rfou, R., & Constant, N. (2021, November). The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (pp. 3045-3059).
    • Liu, X., Ji, K., Fu, Y., Tam, W., Du, Z., Yang, Z., & Tang, J. (2022, May). P-Tuning: Prompt Tuning Can Be Comparable to Fine-Tuning Across Scales and Tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 61-68).
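
    As orientation only (not part of the required reading), here is a minimal sketch of the prompt-tuning idea from Lester et al. (2021): the backbone model stays frozen, and only a small matrix of "soft prompt" vectors, prepended to the input embeddings, is trained. All names, shapes, and hyperparameters below are illustrative assumptions, not the papers' code.

        import torch
        import torch.nn as nn

        class SoftPrompt(nn.Module):
            """Trainable prompt vectors prepended to frozen input embeddings."""

            def __init__(self, prompt_length: int, embed_dim: int):
                super().__init__()
                # The only trainable parameters: prompt_length learned vectors.
                self.prompt = nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.02)

            def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
                # input_embeds: (batch, seq_len, embed_dim) from the frozen model.
                batch = input_embeds.size(0)
                prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
                return torch.cat([prompt, input_embeds], dim=1)

        # Only the soft prompt goes to the optimizer; the language model stays frozen.
        soft_prompt = SoftPrompt(prompt_length=20, embed_dim=768)
        dummy_embeds = torch.randn(2, 10, 768)        # stand-in for frozen embeddings
        extended = soft_prompt(dummy_embeds)          # shape: (2, 30, 768)
        optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)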

    Soft-prompting

    • Li, X. L., & Liang, P. (2021, August). Prefix-Tuning: Optimizing Continuous Prompts for Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers) (pp. 4582-4597).
    • Qin, G., & Eisner, J. (2021, June). Learning How to Ask: Querying LMs with Mixtures of Soft Prompts. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT).
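
    For orientation, a minimal sketch of the prefix-tuning idea from Li & Liang (2021): instead of a single prompt at the input layer, trainable "prefix" key/value vectors are prepended to the attention keys and values at every layer of a frozen transformer. The class below only illustrates the parameterization; layer count, dimensions, and names are assumptions for illustration, not the authors' implementation.

        import torch
        import torch.nn as nn

        class PrefixKV(nn.Module):
            """Per-layer trainable prefix keys and values for a frozen transformer."""

            def __init__(self, num_layers: int, prefix_len: int, num_heads: int, head_dim: int):
                super().__init__()
                shape = (num_layers, prefix_len, num_heads, head_dim)
                # One trainable key tensor and one value tensor per layer.
                self.keys = nn.Parameter(torch.randn(*shape) * 0.02)
                self.values = nn.Parameter(torch.randn(*shape) * 0.02)

            def forward(self, layer: int, k: torch.Tensor, v: torch.Tensor):
                # k, v: (batch, seq_len, num_heads, head_dim) from the frozen model.
                batch = k.size(0)
                pk = self.keys[layer].unsqueeze(0).expand(batch, -1, -1, -1)
                pv = self.values[layer].unsqueeze(0).expand(batch, -1, -1, -1)
                return torch.cat([pk, k], dim=1), torch.cat([pv, v], dim=1)

        prefix = PrefixKV(num_layers=12, prefix_len=10, num_heads=12, head_dim=64)
        k = v = torch.randn(2, 16, 12, 64)
        k_ext, v_ext = prefix(0, k, v)   # sequence length grows from 16 to 26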

    Human Feedback

    • Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., ... & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730-27744.
    • Stiennon, N., Ouyang, L., Wu, J., Ziegler, D., Lowe, R., Voss, C., ... & Christiano, P. F. (2020). Learning to summarize from human feedback. Advances in Neural Information Processing Systems, 33, 3008-3021.
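
    For orientation, a minimal sketch of the pairwise reward-model loss used in the human-feedback pipelines of Stiennon et al. (2020) and Ouyang et al. (2022): given scalar rewards for a human-preferred and a rejected response, the reward model is trained to minimize -log sigmoid(r_chosen - r_rejected). The tensors below are dummy stand-ins for a reward model's outputs.

        import torch
        import torch.nn.functional as F

        def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
            # -log sigmoid(r_chosen - r_rejected), averaged over the batch of comparisons.
            return -F.logsigmoid(r_chosen - r_rejected).mean()

        r_chosen = torch.tensor([1.2, 0.3, 2.0])      # rewards for preferred responses
        r_rejected = torch.tensor([0.4, 0.5, -1.0])   # rewards for rejected responses
        loss = preference_loss(r_chosen, r_rejected)
        print(float(loss))  # smaller when preferred responses get higher rewards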

    Schedule

    • Register on WueCampus2 by the end of April.
    • Attend the kickoff meeting at the beginning of May (first or second week).
    • Work individually on your topic and get feedback from your supervisor(s).
    • Give your presentation, including a Q&A session, in mid-July.
    • Hand in your final report by the end of July.