Awesome Adapter Resources

This repository collects important tools and papers related to adapter methods for recent large pre-trained neural networks.

Adapters, also known as Parameter-Efficient Transfer Learning (PETL) or Parameter-Efficient Fine-Tuning (PEFT) methods, comprise a variety of approaches for adapting large pre-trained models to new tasks while updating only a small number of parameters.

Contents

- Why Adapters?
- Frameworks and Tools
- Surveys
- Natural Language Processing
- Computer Vision
- Audio Processing
- Multi-Modal
- Contributing
- Acknowledgments

Why Adapters?

Large pre-trained (Transformer-based) models have become the foundation of various ML domains in recent years. While the most prevalent method of adapting these models to new tasks is costly full fine-tuning of all model parameters, a range of parameter-efficient and lightweight alternatives, collectively known as adapters, has emerged.

Using adapters provides multiple benefits. They are …
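To make the idea concrete, the sketch below shows a minimal bottleneck adapter in the style of Houlsby et al. (2019), written in PyTorch. This is an illustrative example, not code from this repository or any of the listed frameworks; the class name, bottleneck size, and choice of activation are assumptions.

```python
# Minimal illustrative sketch of a bottleneck adapter (Houlsby et al., 2019 style).
# Assumptions: PyTorch; names and the default bottleneck size are illustrative.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """A small residual bottleneck inserted into a frozen pre-trained layer."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection preserves the pre-trained representation;
        # only the small down/up projections are trained for the new task.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


# Only adapter parameters receive gradients; the backbone stays frozen, e.g.:
#   for p in backbone.parameters():
#       p.requires_grad = False
adapter = BottleneckAdapter(hidden_dim=768)
trainable = sum(p.numel() for p in adapter.parameters())
print(f"trainable adapter parameters: {trainable}")  # ~100k, vs. >100M for full fine-tuning
```

Because only these small projections are trained, a per-task checkpoint is on the order of a hundred thousand parameters rather than the full model's hundreds of millions, which is what makes adapters cheap to store, share, and compose.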

Frameworks and Tools

Surveys

Natural Language Processing

Methods

Composition Methods

Analysis and Evaluation

Applications

Serving

Computer Vision

Methods

Audio Processing

Applications

Multi-Modal

Methods

Contributing

Contributions of new awesome adapter-related resources are very welcome! Before contributing, make sure to read this repository’s contributing guide.

Acknowledgments

Paper metadata is partially retrieved via Semantic Scholar’s API. Paper TLDRs are provided by Semantic Scholar’s TLDR feature.