Dataset Viewer
Auto-converted to Parquet

Columns:
- date: timestamp[us], ranging from 2023-05-05 00:00:00 to 2026-04-08 00:00:00
- arxiv_id: string, length 10
- title: string, lengths 6 to 202
- authors: list, lengths 1 to 3.3k
- github: string, lengths 0 to 116
- abstract: string, lengths 165 to 1.92k
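As a rough sketch, rows following the schema above can be handled as plain records; the two samples below are copied from the listing (author lists and abstracts shortened for brevity), and no particular loading library is assumed — the underlying storage is Parquet, but the field layout is the same either way:

```python
from datetime import datetime

# Two sample records matching the schema: date (timestamp), arxiv_id,
# title, authors (list), github (may be empty), abstract.
records = [
    {
        "date": datetime(2023, 5, 5),
        "arxiv_id": "2305.03048",
        "title": "Personalize Segment Anything Model with One Shot",
        "authors": ["Renrui Zhang", "Zhengkai Jiang"],
        "github": "https://github.com/ZrrSkywalker/Personalize-SAM",
        "abstract": "Driven by large-data pre-training, Segment Anything Model (SAM)...",
    },
    {
        "date": datetime(2023, 5, 8),
        "arxiv_id": "2305.03726",
        "title": "Otter: A Multi-Modal Model with In-Context Instruction Tuning",
        "authors": ["Bo Li", "Yuanhan Zhang"],
        "github": "",  # an empty string marks a paper with no linked repo
        "abstract": "Large language models (LLMs) have demonstrated...",
    },
]

# Example query: IDs of papers that link a GitHub repository
with_code = [r["arxiv_id"] for r in records if r["github"]]
```

Filtering on the empty-string `github` field mirrors how the viewer distinguishes papers with and without code links (length 0 vs. up to 116 characters).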
2023-05-05T00:00:00
2305.03048
Personalize Segment Anything Model with One Shot
[ "Renrui Zhang", "Zhengkai Jiang", "Ziyu Guo", "Shilin Yan", "Junting Pan", "Hao Dong", "Peng Gao", "Hongsheng Li" ]
https://github.com/ZrrSkywalker/Personalize-SAM
Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful and promptable framework, revolutionizing the segmentation models. Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under explored, e.g., automatically segmenting your...
2023-05-05T00:00:00
2305.03043
Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization
[ "Connor Z. Lin", "Koki Nagano", "Jan Kautz", "Eric R. Chan", "Umar Iqbal", "Leonidas Guibas", "Gordon Wetzstein", "Sameh Khamis" ]
There is a growing demand for the accessible creation of high-quality 3D avatars that are animatable and customizable. Although 3D morphable models provide intuitive control for editing and animation, and robustness for single-view face reconstruction, they cannot easily capture geometric and appearance details. Method...
2023-05-05T00:00:00
2305.02665
Learning Language-Specific Layers for Multilingual Machine Translation
[ "Telmo Pessoa Pires", "Robin M. Schmidt", "Yi-Hsiu Liao", "Stephan Peitz" ]
Multilingual Machine Translation promises to improve translation quality between non-English languages. This is advantageous for several reasons, namely lower latency (no need to translate twice), and reduced error cascades (e.g., avoiding losing gender and formality information when translating through English). On th...
2023-05-05T00:00:00
2305.02549
FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction
[ "Chen-Yu Lee", "Chun-Liang Li", "Hao Zhang", "Timothy Dozat", "Vincent Perot", "Guolong Su", "Xiang Zhang", "Kihyuk Sohn", "Nikolai Glushnev", "Renshen Wang", "Joshua Ainslie", "Shangbang Long", "Siyang Qin", "Yasuhisa Fujii", "Nan Hua", "Tomas Pfister" ]
The recent advent of self-supervised pre-training techniques has led to a surge in the use of multimodal learning in form document understanding. However, existing approaches that extend the mask language modeling to other modalities require careful multi-task tuning, complex reconstruction target designs, or additiona...
2023-05-05T00:00:00
2305.02499
AutoML-GPT: Automatic Machine Learning with GPT
[ "Shujian Zhang", "Chengyue Gong", "Lemeng Wu", "Xingchao Liu", "Mingyuan Zhou" ]
AI tasks encompass a wide range of domains and fields. While numerous AI models have been designed for specific tasks and applications, they often require considerable human efforts in finding the right model architecture, optimization algorithm, and hyperparameters. Recent advances in large language models (LLMs) like...
2023-05-05T00:00:00
2305.03049
NeuralEditor: Editing Neural Radiance Fields via Manipulating Point Clouds
[ "Jun-Kun Chen", "Jipeng Lyu", "Yu-Xiong Wang" ]
This paper proposes NeuralEditor that enables neural radiance fields (NeRFs) natively editable for general shape editing tasks. Despite their impressive results on novel-view synthesis, it remains a fundamental challenge for NeRFs to edit the shape of the scene. Our key insight is to exploit the explicit point cloud re...
2023-05-05T00:00:00
2305.03040
TUVF: Learning Generalizable Texture UV Radiance Fields
[ "An-Chieh Cheng", "Xueting Li", "Sifei Liu", "Xiaolong Wang" ]
Textures are a vital aspect of creating visually appealing and realistic 3D models. In this paper, we study the problem of generating high-fidelity texture given shapes of 3D assets, which has been relatively less explored compared with generic 3D shape modeling. Our goal is to facilitate a controllable texture generat...
2023-05-05T00:00:00
2305.03027
NeRSemble: Multi-view Radiance Field Reconstruction of Human Heads
[ "Tobias Kirschstein", "Shenhan Qian", "Simon Giebenhain", "Tim Walter", "Matthias Nießner" ]
We focus on reconstructing high-fidelity radiance fields of human heads, capturing their animations over time, and synthesizing re-renderings from novel viewpoints at arbitrary time steps. To this end, we propose a new multi-view capture setup composed of 16 calibrated machine vision cameras that record time-synchroniz...
2023-05-05T00:00:00
2305.02968
Masked Trajectory Models for Prediction, Representation, and Control
[ "Philipp Wu", "Arjun Majumdar", "Kevin Stone", "Yixin Lin", "Igor Mordatch", "Pieter Abbeel", "Aravind Rajeswaran" ]
https://github.com/facebookresearch/mtm
We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns vers...
2023-05-05T00:00:00
2305.02678
Real-Time Neural Appearance Models
[ "Tizian Zeltner", "Fabrice Rousselle", "Andrea Weidlich", "Petrik Clarberg", "Jan Novák", "Benedikt Bitterli", "Alex Evans", "Tomáš Davidovič", "Simon Kallweit", "Aaron Lefohn" ]
We present a complete system for real-time rendering of scenes with complex appearance previously reserved for offline use. This is achieved with a combination of algorithmic and system level innovations. Our appearance model utilizes learned hierarchical textures that are interpreted using neural decoders, which pro...
2023-05-05T00:00:00
2305.03052
Tracking through Containers and Occluders in the Wild
[ "Basile Van Hoorick", "Pavel Tokmakov", "Simon Stent", "Jie Li", "Carl Vondrick" ]
Tracking objects with persistence in cluttered and dynamic environments remains a difficult challenge for computer vision systems. In this paper, we introduce TCOW, a new benchmark and model for visual tracking through heavy occlusion and containment. We set up a task where the goal is to, given a video sequence, segme...
2023-05-05T00:00:00
2305.02790
BranchNorm: Robustly Scaling Extremely Deep Transformers
[ "Yijin Liu", "Xianfeng Zeng", "Fandong Meng", "Jie Zhou" ]
Recently, DeepNorm scales Transformers into extremely deep (i.e., 1000 layers) and reveals the promising potential of deep scaling. To stabilize the training of deep models, DeepNorm (Wang et al., 2022) attempts to constrain the model update to a constant value. Although applying such a constraint can benefit the early...
2023-05-05T00:00:00
2305.02412
Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents
[ "Yue Wu", "So Yeon Min", "Yonatan Bisk", "Ruslan Salakhutdinov", "Amos Azaria", "Yuanzhi Li", "Tom Mitchell", "Shrimai Prabhumoye" ]
Pre-trained large language models (LLMs) capture procedural knowledge about the world. Recent work has leveraged LLM's ability to generate abstract plans to simplify challenging control tasks, either by action scoring, or action modeling (fine-tuning). However, the transformer architecture inherits several constraints ...
2023-05-05T00:00:00
2305.02783
Automated Code generation for Information Technology Tasks in YAML through Large Language Models
[ "Saurabh Pujar", "Luca Buratti", "Xiaojie Guo", "Nicolas Dupuis", "Burn Lewis", "Sahil Suneja", "Atin Sood", "Ganesh Nalawade", "Matt Jones", "Alessandro Morari", "Ruchir Puri" ]
The recent improvement in code generation capabilities due to the use of large language models has mainly benefited general purpose programming languages. Domain specific languages, such as the ones used for IT Automation, have received far less attention, despite involving many active developers and being an essential...
2023-05-05T00:00:00
2305.02440
Cheaply Evaluating Inference Efficiency Metrics for Autoregressive Transformer APIs
[ "Deepak Narayanan", "Keshav Santhanam", "Peter Henderson", "Rishi Bommasani", "Tony Lee", "Percy Liang" ]
Large language models (LLMs) power many state-of-the-art systems in natural language processing. However, these models are extremely computationally expensive, even at inference time, raising the natural question: when is the extra cost of deploying a larger model worth the anticipated boost in capabilities? Better und...
2023-05-05T00:00:00
2305.03047
Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
[ "Zhiqing Sun", "Yikang Shen", "Qinhong Zhou", "Hongxin Zhang", "Zhenfang Chen", "David Cox", "Yiming Yang", "Chuang Gan" ]
Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback (RLHF) to align the output of large language models (LLMs) with human intentions, ensuring they are helpful, ethical, and reliable. However, this dependen...
2023-05-05T00:00:00
2305.02483
ChatGPT-steered Editing Instructor for Customization of Abstractive Summarization
[ "Wen Xiao", "Yujia Xie", "Giuseppe Carenini", "Pengcheng He" ]
Tailoring outputs of large language models, such as ChatGPT, to specific user needs remains a challenge despite their impressive generation quality. In this paper, we propose a tri-agent generation pipeline consisting of a generator, an instructor, and an editor to enhance the customization of generated outputs. The ge...
2023-05-05T00:00:00
2305.02463
Shap-E: Generating Conditional 3D Implicit Functions
[ "Heewoo Jun", "Alex Nichol" ]
https://github.com/openai/shap-e
We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages:...
2023-05-08T00:00:00
2305.03111
Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs
[ "Jinyang Li", "Binyuan Hui", "Ge Qu", "Binhua Li", "Jiaxi Yang", "Bowen Li", "Bailin Wang", "Bowen Qin", "Rongyu Cao", "Ruiying Geng", "Nan Huo", "Chenhao Ma", "Kevin C. C. Chang", "Fei Huang", "Reynold Cheng", "Yongbin Li" ]
Text-to-SQL parsing, which aims at converting natural language instructions into executable SQLs, has gained increasing attention in recent years. In particular, Codex and ChatGPT have shown impressive results in this task. However, most of the prevalent benchmarks, i.e., Spider, and WikiSQL, focus on database schema w...
2023-05-08T00:00:00
2305.03726
Otter: A Multi-Modal Model with In-Context Instruction Tuning
[ "Bo Li", "Yuanhan Zhang", "Liangyu Chen", "Jinghao Wang", "Jingkang Yang", "Ziwei Liu" ]
Large language models (LLMs) have demonstrated significant universal capabilities as few/zero-shot learners in various tasks due to their pre-training on vast amounts of text data, as exemplified by GPT-3, which boosted to InstructGPT and ChatGPT, effectively following natural language instructions to accomplish real-wo...
2023-05-08T00:00:00
2305.03695
Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements
[ "Jiacheng Liu", "Wenya Wang", "Dianzhuo Wang", "Noah A. Smith", "Yejin Choi", "Hannaneh Hajishirzi" ]
Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures. We consider a retrospective verification approach that reflects on the correctness of LM outputs, and introduce Vera, a general-purpose model that estimates the plausibility of declarat...
2023-05-08T00:00:00
2305.03713
Avatar Fingerprinting for Authorized Use of Synthetic Talking-Head Videos
[ "Ekta Prashnani", "Koki Nagano", "Shalini De Mello", "David Luebke", "Orazio Gallo" ]
Modern generators render talking-head videos with impressive levels of photorealism, ushering in new user experiences such as videoconferencing under constrained bandwidth budgets. Their safe adoption, however, requires a mechanism to verify if the rendered video is trustworthy. For instance, for videoconferencing we m...
2023-05-08T00:00:00
2305.03668
A Suite of Generative Tasks for Multi-Level Multimodal Webpage Understanding
[ "Andrea Burns", "Krishna Srinivasan", "Joshua Ainslie", "Geoff Brown", "Bryan A. Plummer", "Kate Saenko", "Jianmo Ni", "Mandy Guo" ]
Webpages have been a rich, scalable resource for vision-language and language only tasks. Yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data left underused. To study m...
2023-05-08T00:00:00
2305.03286
Composite Motion Learning with Task Control
[ "Pei Xu", "Xiumin Shang", "Victor Zordan", "Ioannis Karamouzas" ]
We present a deep learning method for composite and task-driven motion control for physically simulated characters. In contrast to existing data-driven approaches using reinforcement learning that imitate full-body motions, we learn decoupled motions for specific body parts from multiple reference motions simultaneousl...
2023-05-08T00:00:00
2305.03689
COLA: How to adapt vision-language models to Compose Objects Localized with Attributes?
[ "Arijit Ray", "Filip Radenovic", "Abhimanyu Dubey", "Bryan A. Plummer", "Ranjay Krishna", "Kate Saenko" ]
Compositional reasoning is a hallmark of human visual intelligence; yet despite the size of large vision-language models, they struggle to represent simple compositions by combining objects with their attributes. To measure this lack of compositional capability, we design Cola, a text-to-image retrieval benchmark to Co...
2023-05-08T00:00:00
2305.03210
AttentionViz: A Global View of Transformer Attention
[ "Catherine Yeh", "Yida Chen", "Aoyu Wu", "Cynthia Chen", "Fernanda Viégas", "Martin Wattenberg" ]
Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows these models to learn rich, contextual relationships between elemen...
2023-05-08T00:00:00
2305.03719
Governance of the AI, by the AI, and for the AI
[ "Andrew W. Torrance", "Bill Tomlinson" ]
Over the past half century, there have been several false dawns during which the "arrival" of world-changing artificial intelligence (AI) has been heralded. Tempting fate, the authors believe the age of AI has, indeed, finally arrived. Powerful image generators, such as DALL-E2 and Midjourney have suddenly allowed anyo...
2023-05-08T00:00:00
2305.03514
Can Large Language Models Transform Computational Social Science?
[ "Caleb Ziems", "William Held", "Omar Shaikh", "Jiaao Chen", "Zhehao Zhang", "Diyi Yang" ]
Large Language Models (LLMs) like ChatGPT are capable of successfully performing many language processing tasks zero-shot (without the need for training data). If this capacity also applies to the coding of social phenomena like persuasiveness and political ideology, then LLMs could effectively transform Computational ...
2023-05-08T00:00:00
2305.03509
Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion
[ "Seongmin Lee", "Benjamin Hoover", "Hendrik Strobelt", "Zijie J. Wang", "ShengYun Peng", "Austin Wright", "Kevin Li", "Haekyu Park", "Haoyang Yang", "Duen Horng Chau" ]
Diffusion-based generative models' impressive ability to create convincing images has captured global attention. However, their complex internal structures and operations often make them difficult for non-experts to understand. We present Diffusion Explainer, the first interactive visualization tool that explains how S...
2023-05-09T00:00:00
2305.04091
Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
[ "Lei Wang", "Wanyu Xu", "Yihuai Lan", "Zhiqiang Hu", "Yunshi Lan", "Roy Ka-Wei Lee", "Ee-Peng Lim" ]
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting
Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and...
2023-05-09T00:00:00
2305.04789
AvatarReX: Real-time Expressive Full-body Avatars
[ "Zerong Zheng", "Xiaochen Zhao", "Hongwen Zhang", "Boning Liu", "Yebin Liu" ]
We present AvatarReX, a new method for learning NeRF-based full-body avatars from video data. The learnt avatar not only provides expressive control of the body, hands and the face together, but also supports real-time animation and rendering. To this end, we propose a compositional avatar representation, where the bod...
2023-05-09T00:00:00
2305.04268
Multi-Space Neural Radiance Fields
[ "Ze-Xin Yin", "Jiaxiong Qiu", "Ming-Ming Cheng", "Bo Ren" ]
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of reflective objects, often resulting in blurry or distorted rendering. Instead of calculating a single radiance field, we propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel...
2023-05-09T00:00:00
2305.04241
Vcc: Scaling Transformers to 128K Tokens or More by Prioritizing Important Tokens
[ "Zhanpeng Zeng", "Cole Hawkins", "Mingyi Hong", "Aston Zhang", "Nikolaos Pappas", "Vikas Singh", "Shuai Zheng" ]
Transformer models are foundational to natural language processing (NLP) and computer vision. Despite various recent works devoted to reducing the quadratic cost of such models (as a function of the sequence length n), dealing with ultra long sequences efficiently (e.g., with more than 16K tokens) remains challenging. ...
2023-05-09T00:00:00
2305.04790
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans
[ "Tao Gong", "Chengqi Lyu", "Shilong Zhang", "Yudong Wang", "Miao Zheng", "Qian Zhao", "Kuikun Liu", "Wenwei Zhang", "Ping Luo", "Kai Chen" ]
https://github.com/open-mmlab/Multimodal-GPT
We present a vision and language model named MultiModal-GPT to conduct multi-round dialogue with humans. MultiModal-GPT can follow various instructions from humans, such as generating a detailed caption, counting the number of interested objects, and answering general questions from users. MultiModal-GPT is parameter-e...
2023-05-09T00:00:00
2305.04388
Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
[ "Miles Turpin", "Julian Michael", "Ethan Perez", "Samuel R. Bowman" ]
Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. However, we find that CoT explana...
2023-05-09T00:00:00
2305.03981
Pre-training Language Model as a Multi-perspective Course Learner
[ "Beiduo Chen", "Shaohan Huang", "Zihan Zhang", "Wu Guo", "Zhenhua Ling", "Haizhen Huang", "Furu Wei", "Weiwei Deng", "Qi Zhang" ]
ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic construction capability among various downstream tasks. Despite the convincing performance, ELECTRA still faces the challenges of monotonous training and deficient interaction. Generator with only masked language modeling (MLM...
2023-05-09T00:00:00
2305.03937
Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization
[ "Anastasia Razdaibiedina", "Yuning Mao", "Rui Hou", "Madian Khabsa", "Mike Lewis", "Jimmy Ba", "Amjad Almahairi" ]
Prompt tuning is one of the successful approaches for parameter-efficient tuning of pre-trained language models. Despite being arguably the most parameter-efficient (tuned soft prompts constitute <0.1% of total parameters), it typically performs worse than other efficient tuning methods and is quite sensitive to hyper-...
2023-05-09T00:00:00
2305.04160
X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages
[ "Feilong Chen", "Minglun Han", "Haozhi Zhao", "Qingyang Zhang", "Jing Shi", "Shuang Xu", "Bo Xu" ]
Large language models (LLMs) have demonstrated remarkable language abilities. GPT-4, based on advanced LLMs, exhibits extraordinary multimodal capabilities beyond previous visual language models. We attribute this to the use of more advanced LLMs compared with previous multimodal models. Unfortunately, the model archit...
2023-05-09T00:00:00
2305.04745
Controllable Light Diffusion for Portraits
[ "David Futschik", "Kelvin Ritland", "James Vecore", "Sean Fanello", "Sergio Orts-Escolano", "Brian Curless", "Daniel Sýkora", "Rohit Pandey" ]
We introduce light diffusion, a novel method to improve lighting in portraits, softening harsh shadows and specular highlights while preserving overall scene illumination. Inspired by professional photographers' diffusers and scrims, our method softens lighting given only a single portrait photo. Previous portrait reli...
2023-05-09T00:00:00
2305.04461
Locally Attentional SDF Diffusion for Controllable 3D Shape Generation
[ "Xin-Yang Zheng", "Hao Pan", "Peng-Shuai Wang", "Xin Tong", "Yang Liu", "Heung-Yeung Shum" ]
Although the recent rapid evolution of 3D generative neural networks greatly improves 3D shape generation, it is still not convenient for ordinary users to create 3D shapes and control the local geometry of generated shapes. To address these challenges, we propose a diffusion-based 3D generation framework -- locally at...
2023-05-09T00:00:00
2305.04391
A Variational Perspective on Solving Inverse Problems with Diffusion Models
[ "Morteza Mardani", "Jiaming Song", "Jan Kautz", "Arash Vahdat" ]
Diffusion models have emerged as a key pillar of foundation models in visual domains. One of their critical applications is to universally solve different downstream inverse tasks via a single diffusion prior without re-training for each task. Most inverse tasks can be formulated as inferring a posterior distribution o...
2023-05-10T00:00:00
2305.05662
InternChat: Solving Vision-Centric Tasks by Interacting with Chatbots Beyond Language
[ "Zhaoyang Liu", "Yinan He", "Wenhai Wang", "Weiyun Wang", "Yi Wang", "Shoufa Chen", "Qinglong Zhang", "Yang Yang", "Qingyun Li", "Jiashuo Yu", "Kunchang Li", "Zhe Chen", "Xue Yang", "Xizhou Zhu", "Yali Wang", "Limin Wang", "Ping Luo", "Jifeng Dai", "Yu Qiao" ]
https://github.com/OpenGVLab/InternChat
We present an interactive visual framework named InternChat, or iChat for short. The framework integrates chatbots that have planning and reasoning capabilities, such as ChatGPT, with non-verbal instructions like pointing movements that enable users to directly manipulate images or videos on the screen. Pointing (inclu...
2023-05-10T00:00:00
2305.05591
AudioSlots: A slot-centric generative model for audio separation
[ "Pradyumna Reddy", "Scott Wisdom", "Klaus Greff", "John R. Hershey", "Thomas Kipf" ]
In a range of recent works, object-centric architectures have been shown to be suitable for unsupervised scene decomposition in the vision domain. Inspired by these methods we present AudioSlots, a slot-centric generative model for blind source separation in the audio domain. AudioSlots is built using permutation-equiv...
2023-05-10T00:00:00
2304.09355
To Compress or Not to Compress- Self-Supervised Learning and Information Theory: A Review
[ "Ravid Shwartz-Ziv", "Yann LeCun" ]
Deep neural networks have demonstrated remarkable performance in supervised learning tasks but require large amounts of labeled data. Self-supervised learning offers an alternative paradigm, enabling the model to learn from data without explicit labels. Information theory has been instrumental in understanding and opti...
2023-05-10T00:00:00
2305.05432
WikiWeb2M: A Page-Level Multimodal Wikipedia Dataset
[ "Andrea Burns", "Krishna Srinivasan", "Joshua Ainslie", "Geoff Brown", "Bryan A. Plummer", "Kate Saenko", "Jianmo Ni", "Mandy Guo" ]
Webpages have been a rich resource for language and vision-language tasks. Yet only pieces of webpages are kept: image-caption pairs, long text articles, or raw HTML, never all in one place. Webpage tasks have resultingly received little attention and structured image-text data underused. To study multimodal webpage un...
2023-05-10T00:00:00
2305.04966
NerfAcc: Efficient Sampling Accelerates NeRFs
[ "Ruilong Li", "Hang Gao", "Matthew Tancik", "Angjoo Kanazawa" ]
Optimizing and rendering Neural Radiance Fields is computationally expensive due to the vast number of samples required by volume rendering. Recent works have included alternative sampling approaches to help accelerate their methods, however, they are often not the focus of the work. In this paper, we investigate and c...
2023-05-10T00:00:00
2305.05065
Recommender Systems with Generative Retrieval
[ "Shashank Rajput", "Nikhil Mehta", "Anima Singh", "Raghunandan H. Keshavan", "Trung Vu", "Lukasz Heldt", "Lichan Hong", "Yi Tay", "Vinh Q. Tran", "Jonah Samost", "Maciej Kula", "Ed H. Chi", "Maheswaran Sathiamoorthy" ]
Modern recommender systems leverage large-scale retrieval models consisting of two stages: training a dual-encoder model to embed queries and candidates in the same space, followed by an Approximate Nearest Neighbor (ANN) search to select top candidates given a query's embedding. In this paper, we propose a new single-...
2023-05-10T00:00:00
2305.05658
TidyBot: Personalized Robot Assistance with Large Language Models
[ "Jimmy Wu", "Rika Antonova", "Adam Kan", "Marion Lepert", "Andy Zeng", "Shuran Song", "Jeannette Bohg", "Szymon Rusinkiewicz", "Thomas Funkhouser" ]
For a robot to personalize physical assistance effectively, it must learn user preferences that can be generally reapplied to future scenarios. In this work, we investigate personalization of household cleanup with robots that can tidy up rooms by picking up objects and putting them away. A key challenge is determining...
2023-05-10T00:00:00
2305.05383
Code Execution with Pre-trained Language Models
[ "Chenxiao Liu", "Shuai Lu", "Weizhu Chen", "Daxin Jiang", "Alexey Svyatkovskiy", "Shengyu Fu", "Neel Sundaresan", "Nan Duan" ]
Code execution is a fundamental aspect of programming language semantics that reflects the exact behavior of the code. However, most pre-trained models for code intelligence ignore the execution trace and only rely on source code and syntactic structures. In this paper, we investigate how well pre-trained models can un...
2023-05-10T00:00:00
2305.05364
Large Language Model Programs
[ "Imanol Schlag", "Sainbayar Sukhbaatar", "Asli Celikyilmaz", "Wen-tau Yih", "Jason Weston", "Jürgen Schmidhuber", "Xian Li" ]
In recent years, large pre-trained language models (LLMs) have demonstrated the ability to follow instructions and perform novel tasks from a few examples. The possibility to parameterise an LLM through such in-context examples widens their capability at a much lower cost than finetuning. We extend this line of reasoni...
2023-05-10T00:00:00
2305.05644
Towards Building the Federated GPT: Federated Instruction Tuning
[ "Jianyi Zhang", "Saeed Vahidian", "Martin Kuo", "Chunyuan Li", "Ruiyi Zhang", "Guoyin Wang", "Yiran Chen" ]
While "instruction-tuned" generative large language models (LLMs) have demonstrated an impressive ability to generalize to new tasks, the training phases heavily rely on large amounts of diverse and high-quality instruction data (such as ChatGPT and GPT-4). Unfortunately, acquiring high-quality data, especially when i...
2023-05-10T00:00:00
2305.05176
FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance
[ "Lingjiao Chen", "Matei Zaharia", "James Zou" ]
There is a rapidly growing number of large language models (LLMs) that users can query for a fee. We review the cost associated with querying popular LLM APIs, e.g. GPT-4, ChatGPT, J1-Jumbo, and find that these models have heterogeneous pricing structures, with fees that can differ by two orders of magnitude. In partic...
2023-05-10T00:00:00
2305.05189
SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models
[ "Shanshan Zhong", "Zhongzhan Huang", "Wushao Wen", "Jinghui Qin", "Liang Lin" ]
Diffusion models, which have emerged to become popular text-to-image generation models, can produce high-quality and content-rich images guided by textual prompts. However, there are limitations to semantic understanding and commonsense reasoning in existing models when the input prompts are concise narrative, resultin...
2023-05-11T00:00:00
2305.06161
StarCoder: may the source be with you!
[ "Raymond Li", "Loubna Ben Allal", "Yangtian Zi", "Niklas Muennighoff", "Denis Kocetkov", "Chenghao Mou", "Marc Marone", "Christopher Akiki", "Jia Li", "Jenny Chim", "Qian Liu", "Evgenii Zheltonozhskii", "Terry Yue Zhuo", "Thomas Wang", "Olivier Dehaene", "Mishig Davaadorj", "Joel Lam...
The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. Sta...
2023-05-11T00:00:00
2305.06355
VideoChat: Chat-Centric Video Understanding
[ "KunChang Li", "Yinan He", "Yi Wang", "Yizhuo Li", "Wenhai Wang", "Ping Luo", "Yali Wang", "Limin Wang", "Yu Qiao" ]
https://github.com/OpenGVLab/Ask-Anything
In this study, we initiate an exploration into video understanding by introducing VideoChat, an end-to-end chat-centric video understanding system. It integrates video foundation models and large language models via a learnable neural interface, excelling in spatiotemporal reasoning, event localization, and causal rela...
2023-05-11T00:00:00
2305.06356
HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion
[ "Mustafa Işık", "Martin Rünz", "Markos Georgopoulos", "Taras Khakhulin", "Jonathan Starck", "Lourdes Agapito", "Matthias Nießner" ]
Representing human performance at high-fidelity is an essential building block in diverse applications, such as film production, computer games or videoconferencing. To close the gap to production-level quality, we introduce HumanRF, a 4D dynamic neural scene representation that captures full-body appearance in motion ...
2023-05-11T00:00:00
2305.06351
Reconstructing Animatable Categories from Videos
[ "Gengshan Yang", "Chaoyang Wang", "N Dinesh Reddy", "Deva Ramanan" ]
Building animatable 3D models is challenging due to the need for 3D scans, laborious registration, and manual rigging, which are difficult to scale to arbitrary categories. Recently, differentiable rendering provides a pathway to obtain high-quality 3D models from monocular videos, but these are limited to rigid catego...
2023-05-11T00:00:00
2305.05706
DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects
[ "Chen Bao", "Helin Xu", "Yuzhe Qin", "Xiaolong Wang" ]
To enable general-purpose robots, we will require the robot to operate daily articulated objects as humans do. Current robot manipulation has heavily relied on using a parallel gripper, which restricts the robot to a limited set of objects. On the other hand, operating with a multi-finger robot hand will allow better a...
2023-05-11T00:00:00
2305.06131
Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era
[ "Chenghao Li", "Chaoning Zhang", "Atish Waghwase", "Lik-Hang Lee", "Francois Rameau", "Yang Yang", "Sung-Ho Bae", "Choong Seon Hong" ]
Generative AI (AIGC, a.k.a. AI generated content) has made remarkable progress in the past few years, among which text-guided content generation is the most practical one since it enables the interaction between human instruction and AIGC. Due to the development in text-to-image as well 3D modeling technologies (like N...
2023-05-11T00:00:00
2305.06324
Alternating Gradient Descent and Mixture-of-Experts for Integrated Multimodal Perception
[ "Hassan Akbari", "Dan Kondratyuk", "Yin Cui", "Rachel Hornung", "Huisheng Wang", "Hartwig Adam" ]
We present Integrated Multimodal Perception (IMP), a simple and scalable multimodal multi-task training and modeling approach. IMP integrates multimodal inputs including image, video, text, and audio into a single Transformer encoder with minimal modality-specific components. IMP makes use of a novel design that combin...
2023-05-11T00:00:00
2305.06218
Multi-Task End-to-End Training Improves Conversational Recommendation
[ "Naveen Ram", "Dima Kuzmin", "Ellie Ka In Chio", "Moustafa Farid Alzantot", "Santiago Ontanon", "Ambarish Jash", "Judith Yue Li" ]
In this paper, we analyze the performance of a multitask end-to-end transformer model on the task of conversational recommendations, which aim to provide recommendations based on a user's explicit preferences expressed in dialogue. While previous works in this area adopt complex multi-component approaches where the dia...
2023-05-11T00:00:00
2305.05973
Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models
[ "Aldo Gael Carranza", "Rezsa Farahani", "Natalia Ponomareva", "Alex Kurakin", "Matthew Jagielski", "Milad Nasr" ]
We propose a novel approach for developing privacy-preserving large-scale recommender systems using differentially private (DP) large language models (LLMs) which overcomes certain challenges and limitations in DP training these complex systems. Our method is particularly well suited for the emerging area of LLM-based ...
2023-05-11T00:00:00
2305.05862
Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? An Examination on Several Typical Tasks
[ "Xianzhi Li", "Xiaodan Zhu", "Zhiqiang Ma", "Xiaomo Liu", "Sameena Shah" ]
The most recent large language models such as ChatGPT and GPT-4 have garnered significant attention, as they are capable of generating high-quality responses to human input. Despite the extensive testing of ChatGPT and GPT-4 on generic text corpora, showcasing their impressive capabilities, a study focusing on financia...
2023-05-11T00:00:00
2305.05845
Sketching the Future (STF): Applying Conditional Control Techniques to Text-to-Video Models
[ "Rohan Dhesikan", "Vignesh Rajmohan" ]
The proliferation of video content demands efficient and flexible neural network based approaches for generating new video content. In this paper, we propose a novel approach that combines zero-shot text-to-video generation with ControlNet to improve the output of these models. Our method takes multiple sketched frames...
2023-05-11T00:00:00
2305.06077
Relightify: Relightable 3D Faces from a Single Image via Diffusion Models
[ "Foivos Paraperas Papantoniou", "Alexandros Lattas", "Stylianos Moschoglou", "Stefanos Zafeiriou" ]
Following the remarkable success of diffusion models on image generation, recent works have also demonstrated their impressive ability to address a number of inverse problems in an unsupervised way, by properly constraining the sampling process based on a conditioning input. Motivated by this, in this paper, we present...
2023-05-12T00:00:00
2305.06908
CoMoSpeech: One-Step Speech and Singing Voice Synthesis via Consistency Model
[ "Zhen Ye", "Wei Xue", "Xu Tan", "Jie Chen", "Qifeng Liu", "Yike Guo" ]
Denoising diffusion probabilistic models (DDPMs) have shown promising performance for speech synthesis. However, a large number of iterative steps are required to achieve high sample quality, which restricts the inference speed. Maintaining sample quality while increasing sampling speed has become a challenging task. I...
2023-05-12T00:00:00
2305.07017
An Inverse Scaling Law for CLIP Training
[ "Xianhang Li", "Zeyu Wang", "Cihang Xie" ]
https://github.com/UCSC-VLAA/CLIPA
CLIP, the first foundation model that connects images and text, has enabled many recent breakthroughs in computer vision. However, its associated training cost is prohibitively high, imposing a significant barrier to its widespread exploration. In this paper, we present a surprising finding that there exists an inverse...
2023-05-12T00:00:00
2305.07011
Region-Aware Pretraining for Open-Vocabulary Object Detection with Vision Transformers
[ "Dahun Kim", "Anelia Angelova", "Weicheng Kuo" ]
We present Region-aware Open-vocabulary Vision Transformers (RO-ViT) - a contrastive image-text pretraining recipe to bridge the gap between image-level pretraining and open-vocabulary object detection. At the pretraining phase, we propose to randomly crop and resize regions of positional embeddings instead of using th...
2023-05-12T00:00:00
2305.07015
Exploiting Diffusion Prior for Real-World Image Super-Resolution
[ "Jianyi Wang", "Zongsheng Yue", "Shangchen Zhou", "Kelvin C. K. Chan", "Chen Change Loy" ]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution (SR). Specifically, by employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the gen...
2023-05-12T00:00:00
2305.06456
Perpetual Humanoid Control for Real-time Simulated Avatars
[ "Zhengyi Luo", "Jinkun Cao", "Alexander Winkler", "Kris Kitani", "Weipeng Xu" ]
We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior in the presence of noisy input (e.g. pose estimates from video or generated from language) and unexpected falls. Our controller scales up to learning ten thousand motion clips without using any extern...
2023-05-12T00:00:00
2305.07027
EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention
[ "Xinyu Liu", "Houwen Peng", "Ningxin Zheng", "Yuqing Yang", "Han Hu", "Yixuan Yuan" ]
https://github.com/microsoft/Cream/tree/main/EfficientViT
Vision transformers have shown great success due to their high model capabilities. However, their remarkable performance is accompanied by heavy computation costs, which makes them unsuitable for real-time applications. In this paper, we propose a family of high-speed vision transformers named EfficientViT. We find tha...
2023-05-12T00:00:00
2305.07021
Simple Token-Level Confidence Improves Caption Correctness
[ "Suzanne Petryk", "Spencer Whitehead", "Joseph E. Gonzalez", "Trevor Darrell", "Anna Rohrbach", "Marcus Rohrbach" ]
The ability to judge whether a caption correctly describes an image is a critical part of vision-language understanding. However, state-of-the-art models often misinterpret the correctness of fine-grained details, leading to errors in outputs such as hallucinating objects in generated captions or poor compositional rea...
2023-05-12T00:00:00
2305.07004
Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting
[ "Haoyang Huang", "Tianyi Tang", "Dongdong Zhang", "Wayne Xin Zhao", "Ting Song", "Yan Xia", "Furu Wei" ]
Large language models (LLMs) demonstrate impressive multilingual capability, but their performance varies substantially across different languages. In this work, we introduce a simple yet effective method, called cross-lingual-thought prompting (XLT), to systematically improve the multilingual capability of LLMs. Speci...
2023-05-12T00:00:00
2305.06555
Domain Incremental Lifelong Learning in an Open World
[ "Yi Dai", "Hao Lang", "Yinhe Zheng", "Bowen Yu", "Fei Huang", "Yongbin Li" ]
https://github.com/AlibabaResearch/DAMO-ConvAI/tree/main/diana
Lifelong learning (LL) is an important ability for NLP models to learn new tasks continuously. Architecture-based approaches are reported to be effective implementations for LL models. However, it is non-trivial to extend previous approaches to domain incremental LL scenarios since they either require access to task id...
2023-05-12T00:00:00
2305.06500
InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
[ "Wenliang Dai", "Junnan Li", "Dongxu Li", "Anthony Meng Huat Tiong", "Junqi Zhao", "Weisheng Wang", "Boyang Li", "Pascale Fung", "Steven Hoi" ]
https://github.com/salesforce/LAVIS/tree/main/projects
General-purpose language models that can solve various language-domain tasks have emerged driven by the pre-training and instruction-tuning pipeline. However, building general-purpose vision-language models is challenging due to the increased task discrepancy introduced by the additional visual input. Although vision-l...
2023-05-12T00:00:00
2305.06474
Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction
[ "Wang-Cheng Kang", "Jianmo Ni", "Nikhil Mehta", "Maheswaran Sathiamoorthy", "Lichan Hong", "Ed Chi", "Derek Zhiyuan Cheng" ]
Large Language Models (LLMs) have demonstrated exceptional capabilities in generalizing to new tasks in a zero-shot or few-shot manner. However, the extent to which LLMs can comprehend user preferences based on their previous behavior remains an emerging and still unclear research question. Traditionally, Collaborative...
2023-05-12T00:00:00
2305.06404
LACoS-BLOOM: Low-rank Adaptation with Contrastive objective on 8 bits Siamese-BLOOM
[ "Wen-Yu Hua", "Brian Williams", "Davood Shamsi" ]
Text embeddings are useful features for several NLP applications, such as sentence similarity, text clustering, and semantic search. In this paper, we present a Low-rank Adaptation with a Contrastive objective on top of 8-bit Siamese-BLOOM, a multilingual large language model optimized to produce semantically meaningfu...
2023-05-12T00:00:00
2305.06575
Chain-of-Dictionary Prompting Elicits Translation in Large Language Models
[ "Hongyuan Lu", "Haoyang Huang", "Dongdong Zhang", "Haoran Yang", "Wai Lam", "Furu Wei" ]
Large language models (LLMs) have shown surprisingly good performance in multilingual neural machine translation (MNMT) even when trained without parallel data. Yet, despite the fact that the amount of training data is gigantic, they still struggle with translating rare words, particularly for low-resource languages. E...
2023-05-12T00:00:00
2305.06424
Bot or Human? Detecting ChatGPT Imposters with A Single Question
[ "Hong Wang", "Xuan Luo", "Weizhi Wang", "Xifeng Yan" ]
https://github.com/hongwang600/FLAIR
Large language models like ChatGPT have recently demonstrated impressive capabilities in natural language understanding and generation, enabling various applications including translation, essay writing, and chit-chatting. However, there is a concern that they can be misused for malicious purposes, such as fraud or den...
2023-05-12T00:00:00
2305.06594
V2Meow: Meowing to the Visual Beat via Music Generation
[ "Kun Su", "Judith Yue Li", "Qingqing Huang", "Dima Kuzmin", "Joonseok Lee", "Chris Donahue", "Fei Sha", "Aren Jansen", "Yu Wang", "Mauro Verzetti", "Timo I. Denk" ]
Generating high quality music that complements the visual content of a video is a challenging task. Most existing visual conditioned music generation systems generate symbolic music data, such as MIDI files, instead of raw audio waveform. Given the limited availability of symbolic music data, such methods can only gene...
2023-05-15T00:00:00
2305.07185
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
[ "Lili Yu", "Dániel Simig", "Colin Flaherty", "Armen Aghajanyan", "Luke Zettlemoyer", "Mike Lewis" ]
Autoregressive transformers are spectacular models for short sequences but scale poorly to long sequences such as high-resolution images, podcasts, code, or books. We propose Megabyte, a multi-scale decoder architecture that enables end-to-end differentiable modeling of sequences of over one million bytes. Megabyte se...
2023-05-15T00:00:00
2305.07440
Optimizing Memory Mapping Using Deep Reinforcement Learning
[ "Pengming Wang", "Mikita Sazanovich", "Berkin Ilbeyi", "Phitchaya Mangpo Phothilimthana", "Manish Purohit", "Han Yang Tay", "Ngân Vũ", "Miaosen Wang", "Cosmin Paduraru", "Edouard Leurent", "Anton Zhernov", "Julian Schrittwieser", "Thomas Hubert", "Robert Tung", "Paula Kurylowicz", "Kie...
Resource scheduling and allocation is a critical component of many high impact systems ranging from congestion control to cloud computing. Finding more optimal solutions to these problems often has significant impact on resource and time savings, reducing device wear-and-tear, and even potentially improving carbon emis...
2023-05-15T00:00:00
2305.07153
Towards best practices in AGI safety and governance: A survey of expert opinion
[ "Jonas Schuett", "Noemi Dreksler", "Markus Anderljung", "David McCaffary", "Lennart Heim", "Emma Bluemke", "Ben Garfinkel" ]
A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI) - AI systems that achieve or exceed human performance across a wide range of cognitive tasks. In pursuing this goal, they may develop and deploy AI systems that pose...
2023-05-15T00:00:00
2305.07558
Measuring Progress in Fine-grained Vision-and-Language Understanding
[ "Emanuele Bugliarello", "Laurent Sartran", "Aishwarya Agrawal", "Lisa Anne Hendricks", "Aida Nematzadeh" ]
While pretraining on large-scale image-text data from the Web has facilitated rapid progress on many vision-and-language (V&L) tasks, recent work has demonstrated that pretrained models lack "fine-grained" understanding, such as the ability to recognise relationships, verbs, and numbers in images. This has resulted in ...
2023-05-15T00:00:00
2305.07615
What are the Desired Characteristics of Calibration Sets? Identifying Correlates on Long Form Scientific Summarization
[ "Griffin Adams", "Bichlien H Nguyen", "Jake Smith", "Yingce Xia", "Shufang Xie", "Anna Ostropolets", "Budhaditya Deb", "Yuan-Jyue Chen", "Tristan Naumann", "Noémie Elhadad" ]
https://github.com/griff4692/calibrating-summaries
Summarization models often generate text that is poorly calibrated to quality metrics because they are trained to maximize the likelihood of a single reference (MLE). To address this, recent work has added a calibration step, which exposes a model to its own ranked outputs to improve relevance or, in a separate line of...
2023-05-15T00:00:00
2305.07514
BlendFields: Few-Shot Example-Driven Facial Modeling
[ "Kacper Kania", "Stephan J. Garbin", "Andrea Tagliasacchi", "Virginia Estellers", "Kwang Moo Yi", "Julien Valentin", "Tomasz Trzciński", "Marek Kowalski" ]
Generating faithful visualizations of human faces requires capturing both coarse and fine-level details of the face geometry and appearance. Existing methods are either data-driven, requiring an extensive corpus of data not publicly accessible to the research community, or fail to capture fine details because they rely...
2023-05-15T00:00:00
2305.07447
Universal Source Separation with Weakly Labelled Data
[ "Qiuqiang Kong", "Ke Chen", "Haohe Liu", "Xingjian Du", "Taylor Berg-Kirkpatrick", "Shlomo Dubnov", "Mark D. Plumbley" ]
https://github.com/bytedance/uss
Universal source separation (USS) is a fundamental research task for computational auditory scene analysis, which aims to separate mono recordings into individual source tracks. There are three potential challenges awaiting the solution to the audio source separation task. First, previous audio source separation system...
2023-05-15T00:00:00
2305.07214
MMG-Ego4D: Multi-Modal Generalization in Egocentric Action Recognition
[ "Xinyu Gong", "Sreyas Mohan", "Naina Dhingra", "Jean-Charles Bazin", "Yilei Li", "Zhangyang Wang", "Rakesh Ranjan" ]
https://github.com/facebookresearch/MMG_Ego4D
In this paper, we study a novel problem in egocentric action recognition, which we term as "Multimodal Generalization" (MMG). MMG aims to study how systems can generalize when data from certain modalities is limited or even completely missing. We thoroughly investigate MMG in the context of standard supervised action r...
2023-05-15T00:00:00
2305.07378
Surfacing Biases in Large Language Models using Contrastive Input Decoding
[ "Gal Yona", "Or Honovich", "Itay Laish", "Roee Aharoni" ]
Ensuring that large language models (LMs) are fair, robust and useful requires an understanding of how different modifications to their inputs impact the model's behaviour. In the context of open-text generation tasks, however, such an evaluation is not trivial. For example, when introducing a model with an input text ...
2023-05-15T00:00:00
2305.07490
ArtGPT-4: Artistic Vision-Language Understanding with Adapter-enhanced MiniGPT-4
[ "Zhengqing Yuan", "Huiwen Xue", "Xinyi Wang", "Yongming Liu", "Zhuanzhe Zhao", "Kun Wang" ]
In recent years, large language models (LLMs) have made significant progress in natural language processing (NLP), with models like ChatGPT and GPT-4 achieving impressive capabilities in various linguistic tasks. However, training models on such a large scale is challenging, and finding datasets that match the model's ...
2023-05-15T00:00:00
2305.07243
Better speech synthesis through scaling
[ "James Betker" ]
https://github.com/neonbjb/tortoise-tts
In recent years, the field of image generation has been revolutionized by the application of autoregressive transformers and DDPMs. These approaches model the process of image generation as a step-wise probabilistic process and leverage large amounts of compute and data to learn the image distribution. This methodolo...
2023-05-16T00:00:00
2305.07759
TinyStories: How Small Can Language Models Be and Still Speak Coherent English?
[ "Ronen Eldan", "Yuanzhi Li" ]
Language models (LMs) are powerful tools for natural language processing, but they often struggle to produce coherent and fluent text when they are small. Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can rarely generate coherent and consistent English text beyond a few words even after ex...
2023-05-16T00:00:00
2305.08596
DarkBERT: A Language Model for the Dark Side of the Internet
[ "Youngjin Jin", "Eugene Jang", "Jian Cui", "Jin-Woo Chung", "Yongjae Lee", "Seungwon Shin" ]
Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we int...
2023-05-16T00:00:00
2305.08810
AutoRecon: Automated 3D Object Discovery and Reconstruction
[ "Yuang Wang", "Xingyi He", "Sida Peng", "Haotong Lin", "Hujun Bao", "Xiaowei Zhou" ]
A fully automated object reconstruction pipeline is crucial for digital content creation. While the area of 3D reconstruction has witnessed profound developments, the removal of background to obtain a clean object model still relies on different forms of manual labor, such as bounding box labeling, mask annotations, an...
2023-05-16T00:00:00
2305.07961
Leveraging Large Language Models in Conversational Recommender Systems
[ "Luke Friedman", "Sameer Ahuja", "David Allen", "Terry Tan", "Hakim Sidahmed", "Changbo Long", "Jun Xie", "Gabriel Schubiner", "Ajay Patel", "Harsh Lara", "Brian Chu", "Zexi Chen", "Manoj Tiwari" ]
A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an unprecedented ability to converse naturally and incorporate world knowledge and common...
2023-05-16T00:00:00
2305.07922
CodeT5+: Open Code Large Language Models for Code Understanding and Generation
[ "Yue Wang", "Hung Le", "Akhilesh Deepak Gotmare", "Nghi D. Q. Bui", "Junnan Li", "Steven C. H. Hoi" ]
Large language models (LLMs) pretrained on vast source code have achieved prominent progress in code intelligence. However, existing code LLMs have two main limitations in terms of architecture and pretraining tasks. First, they often adopt a specific architecture (encoder-only or decoder-only) or rely on a unified enc...
2023-05-16T00:00:00
2305.08675
Improved baselines for vision-language pre-training
[ "Enrico Fini", "Pietro Astolfi", "Adriana Romero-Soriano", "Jakob Verbeek", "Michal Drozdzal" ]
Contrastive learning has emerged as an efficient framework to learn multimodal representations. CLIP, a seminal work in this area, achieved impressive results by training on paired image-text data using the contrastive loss. Recent work claims improvements over CLIP using additional non-contrastive losses inspired from...
2023-05-16T00:00:00
2305.08809
Interpretability at Scale: Identifying Causal Mechanisms in Alpaca
[ "Zhengxuan Wu", "Atticus Geiger", "Christopher Potts", "Noah D. Goodman" ]
Obtaining human-interpretable explanations of large, general-purpose language models is an urgent goal for AI safety. However, it is just as important that our interpretability methods are faithful to the causal dynamics underlying model behavior and able to robustly generalize to unseen inputs. Distributed Alignment S...
2023-05-16T00:00:00
2305.08677
Natural Language Decomposition and Interpretation of Complex Utterances
[ "Harsh Jhamtani", "Hao Fang", "Patrick Xia", "Eran Levy", "Jacob Andreas", "Ben Van Durme" ]
Natural language interfaces often require supervised data to translate user requests into programs, database queries, or other structured intent representations. During data collection, it can be difficult to anticipate and formalize the full range of user needs -- for example, in a system designed to handle simple req...
2023-05-16T00:00:00
2305.08298
Symbol tuning improves in-context learning in language models
[ "Jerry Wei", "Le Hou", "Andrew Lampinen", "Xiangning Chen", "Da Huang", "Yi Tay", "Xinyun Chen", "Yifeng Lu", "Denny Zhou", "Tengyu Ma", "Quoc V. Le" ]
We present symbol tuning - finetuning language models on in-context input-label pairs where natural language labels (e.g., "positive/negative sentiment") are replaced with arbitrary symbols (e.g., "foo/bar"). Symbol tuning leverages the intuition that when a model cannot use instructions or natural language labels to f...
No dataset card yet. Dataset: hysts-bot-data/daily-papers (3,544 downloads last month).
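Since the dataset has no card, here is a minimal sketch of working with rows in the schema shown above (date, arxiv_id, title, authors, github, abstract). The two dicts below are copied from the preview and stand in for the parquet rows; in practice one could load the full dataset with the `datasets` library via `load_dataset("hysts-bot-data/daily-papers")`.

```python
from datetime import datetime

# Two rows from the preview above, following the dataset schema.
# The `github` field may be an empty string when no repo is listed.
rows = [
    {"date": "2023-05-12", "arxiv_id": "2305.07017",
     "title": "An Inverse Scaling Law for CLIP Training",
     "authors": ["Xianhang Li", "Zeyu Wang", "Cihang Xie"],
     "github": "https://github.com/UCSC-VLAA/CLIPA"},
    {"date": "2023-05-15", "arxiv_id": "2305.07185",
     "title": "MEGABYTE: Predicting Million-byte Sequences "
              "with Multiscale Transformers",
     "authors": ["Lili Yu", "Dániel Simig", "Colin Flaherty",
                 "Armen Aghajanyan", "Luke Zettlemoyer", "Mike Lewis"],
     "github": ""},
]

# Keep only papers that ship code, sorted by publication date.
with_code = sorted(
    (r for r in rows if r["github"]),
    key=lambda r: datetime.strptime(r["date"], "%Y-%m-%d"),
)

for r in with_code:
    # arXiv abstract pages follow the https://arxiv.org/abs/<id> pattern.
    print(r["arxiv_id"], f"https://arxiv.org/abs/{r['arxiv_id']}", r["github"])
```

The same filter expressed against the real dataset object would use `dataset.filter(lambda r: r["github"])`, with `date` already typed as `timestamp[us]` per the header.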