The discovery of new materials is crucial to addressing pressing global challenges such as climate change and to enabling advances in next-generation computing. However, existing computational and ...
Alignment with human preferences has led to significant progress in producing honest, safe, and useful responses from Large Language Models (LLMs). Through this alignment process, the models are ...
Proteins, vital macromolecules, are characterized by their amino acid sequences, which dictate their three-dimensional structures and functions in living organisms. Effective generative protein ...
Retrieval-Augmented Generation (RAG) is a growing area of research focused on improving the capabilities of large language models (LLMs) by incorporating external knowledge sources. This approach ...
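As a point of reference for the retrieve-then-generate pattern this abstract refers to, the sketch below shows a minimal, generic RAG pipeline: embed a query, rank a small document corpus by similarity, and prepend the top matches to the prompt before calling an LLM. All names here (embed, retrieve, build_prompt, the toy corpus, and the bag-of-characters embedding) are hypothetical placeholders for illustration, not the method or API of the work summarized above.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy embedding for illustration only: normalized bag of character codes."""
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query and return the top k."""
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context to the query; the result would go to an LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical usage with a toy corpus.
corpus = [
    "RAG augments a language model with documents fetched at inference time.",
    "Scaling laws relate pretraining loss to compute.",
    "Diffusion models generate images from text prompts.",
]
print(build_prompt("How does retrieval-augmented generation work?", corpus))
```

In a real system the toy embedding and in-memory corpus would be replaced by a learned embedding model and a vector index; the overall retrieve-then-prompt structure is what the RAG literature has in common.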
Large Language Models (LLMs) have gained significant attention in data management, with applications spanning data integration, database tuning, query optimization, and data cleaning. However, ...
The rapid progress of text-to-image (T2I) diffusion models has made it possible to generate highly detailed and accurate images from text inputs. However, as the length of the input text increases, ...
One of the most critical challenges for LLMs is how to align these models with human values and preferences, especially in the text they generate. Model outputs are often inaccurate, biased, ...
In the rapidly evolving world of AI, challenges related to scalability, performance, and accessibility remain central to the efforts of research communities and open-source advocates. Issues such as ...
The growing reliance on large language models for coding support raises a significant question: how best to assess their real-world impact on programmer productivity? Current approaches, such as static ...
As large language models (LLMs) grow more capable by the day, their safety has become a critical topic for research. To create a safe model, model providers usually pre-define a ...
The rapid growth of large language models (LLMs) has brought impressive capabilities, but it has also highlighted significant challenges related to resource consumption and scalability. LLMs often ...
Large language models (LLMs) have demonstrated consistent scaling laws, revealing a power-law relationship between pretraining performance and computational resources. This relationship, expressed as ...
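The abstract's exact expression is truncated above. For reference only, the power-law form conventionally used in the scaling-law literature relates pretraining loss to compute as shown below; the symbols are the conventional ones and are an assumption here, not necessarily the notation used in this paper.

```latex
% Conventional power-law scaling form (assumed notation, not the paper's):
% pretraining loss L as a function of compute C, with fitted constants a > 0, b > 0.
L(C) \approx a \, C^{-b}
```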