
Publications

2024

  • ICRA Under Review

    CANVAS: Commonsense-Aware Navigation System for Intuitive Human-Robot Interaction

    Suhwan Choi*, Yongjun Cho*, Minchan Kim*, Jaeyoon Jung*, Myunchul Joe, Yubeen Park, Minseo Kim, Sungwoong Kim, Sungjae Lee, Hwiseong Park, Jiwan Chung, and Youngjae Yu

    Real-life robot navigation involves more than just reaching a destination; it requires optimizing movements while addressing scenario-specific goals. An intuitive way for humans to express these goals is through abstract cues like verbal commands or rough sketches. Such human guidance may lack details or be noisy. Nonetheless, we expect robots to navigate as intended. For robots to interpret and execute these abstract instructions in line with human expectations, they must share a common understanding of basic navigation concepts with humans. To this end, we introduce CANVAS, a novel framework that combines visual and linguistic instructions for commonsense-aware navigation. Its success is driven by imitation learning, enabling the robot to learn from human navigation behavior. We present COMMAND, a comprehensive dataset with human-annotated navigation results, spanning over 48 hours and 219 km, designed to train commonsense-aware navigation systems in simulated environments. Our experiments show that CANVAS outperforms the strong rule-based system ROS NavStack across all environments, demonstrating superior performance with noisy instructions. Notably, in the orchard environment, where ROS NavStack records a 0% total success rate, CANVAS achieves a total success rate of 67%. CANVAS also closely aligns with human demonstrations and commonsense constraints, even in unseen environments. Furthermore, real-world deployment of CANVAS showcases impressive Sim2Real transfer with a total success rate of 69%, highlighting the potential of learning from human demonstrations in simulated environments for real-world applications.

    Github

    Demo
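    The CANVAS abstract above describes imitation learning from human navigation behavior, conditioned on both visual and linguistic instructions. The snippet below is a minimal, hedged sketch of that setup as plain behavior cloning: a policy consumes a camera frame, a rough route sketch, and a tokenized instruction, and is trained to match human-demonstrated waypoints. All module names, shapes, and the loss are illustrative assumptions, not the actual CANVAS architecture (see the linked repository).

```python
# Hedged behavior-cloning sketch; every component here is a toy stand-in.
import torch

class NavigationPolicy(torch.nn.Module):
    def __init__(self, d: int = 256, horizon: int = 8):
        super().__init__()
        self.img_enc = torch.nn.Linear(3 * 64 * 64, d)    # flattened toy camera frame
        self.sketch_enc = torch.nn.Linear(64 * 64, d)     # rough route sketch
        self.text_enc = torch.nn.EmbeddingBag(1000, d)    # tokenized instruction
        self.head = torch.nn.Linear(3 * d, horizon * 2)   # future (x, y) waypoints

    def forward(self, img, sketch, tokens):
        h = torch.cat(
            [self.img_enc(img), self.sketch_enc(sketch), self.text_enc(tokens)], dim=-1
        )
        return self.head(h).view(img.size(0), -1, 2)

policy = NavigationPolicy()
img = torch.randn(4, 3 * 64 * 64)
sketch = torch.randn(4, 64 * 64)
tokens = torch.randint(0, 1000, (4, 12))
expert_waypoints = torch.randn(4, 8, 2)            # human-demonstrated path
loss = torch.nn.functional.mse_loss(policy(img, sketch, tokens), expert_waypoints)
loss.backward()   # imitation step: regress onto the human demonstration
```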

  • NeurIPS Workshop OWA (Oral)

    Integrating Visual and Linguistic Instructions for Context-Aware Navigation Agents

    Suhwan Choi*, Yongjun Cho*, Minchan Kim*, Jaeyoon Jung*, Myunchul Joe, Yubeen Park, Minseo Kim, Sungwoong Kim, Sungjae Lee, Hwiseong Park, Jiwan Chung, and Youngjae Yu

    Real-life robot navigation involves more than simply reaching a destination; it requires optimizing movements while considering scenario-specific goals. Humans often express these goals through abstract cues, such as verbal commands or rough sketches. While this guidance may be vague or noisy, we still expect robots to navigate as intended. For robots to interpret and execute these abstract instructions in line with human expectations, they need to share a basic understanding of navigation concepts with humans. To address this challenge, we introduce CANVAS, a novel framework that integrates both visual and linguistic instructions for commonsense-aware navigation. CANVAS leverages imitation learning, enabling robots to learn from human navigation behavior. We also present COMMAND, a comprehensive dataset that includes human-annotated navigation results spanning over 48 hours and 219 kilometers, specifically designed to train commonsense-aware navigation systems in simulated environments. Our experiments demonstrate that CANVAS outperforms the strong rule-based ROS NavStack system across all environments, excelling even with noisy instructions. In particular, in the orchard environment where ROS NavStack achieved a 0% success rate, CANVAS reached a 67% success rate. CANVAS also closely aligns with human demonstrations and commonsense constraints, even in unseen environments. Moreover, real-world deployment of CANVAS shows impressive Sim2Real transfer, with a total success rate of 69%, highlighting the potential of learning from human demonstrations in simulated environments for real-world applications.

    Github

    Demo

  • EMNLP Workshop FEVER (Oral)

    HerO at AVeriTeC: The Herd of Open Large Language Models for Verifying Real-World Claims

    Yejun Yoon*, Jaeyoon Jung*, Seunghyun Yoon, and Kunwoo Park

    To tackle the AVeriTeC shared task hosted by the FEVER-24 workshop, we introduce HerO, the Herd of Open LLMs for verifying real-world claims: a system that employs only publicly available large language models (LLMs), using multiple LLMs across the steps of automated fact-checking. For evidence retrieval, a language model enhances the query by generating hypothetical fact-checking documents. For question generation and veracity prediction, we prompt pretrained and fine-tuned LLMs with retrieved in-context samples. HerO achieved 2nd place on the leaderboard with an AVeriTeC score of 0.57, suggesting the potential of open LLMs for verifying real-world claims. For future research, we make our code publicly available at https://github.com/ssu-humane/HerO.

    Github
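    The evidence-retrieval step described in the HerO abstract, enhancing the query with a generated hypothetical fact-checking document, can be sketched as below. The generate and embed functions are hypothetical stand-ins for an open LLM and a text encoder (they are not part of the HerO codebase); the linked repository holds the actual pipeline.

```python
# Hedged sketch of hypothetical-document query expansion for evidence retrieval.
import numpy as np

def generate(prompt: str) -> str:
    # Placeholder for an open LLM drafting a hypothetical fact-checking document.
    return "A hypothetical fact-checking article discussing the claim."

def embed(text: str) -> np.ndarray:
    # Placeholder for a text encoder; returns a unit-norm vector.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def retrieve(claim: str, corpus: list[str], k: int = 5) -> list[str]:
    # Enhance the query with the generated document, then rank passages by
    # cosine similarity to the combined embedding.
    hypothetical = generate(f"Write a fact-checking article about: {claim}")
    query_vec = embed(claim + " " + hypothetical)
    scores = [float(query_vec @ embed(passage)) for passage in corpus]
    order = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in order]
```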

  • DCASE2024 Workshop

    EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance

    Jaeyeon Kim*, Minjeong Jeon, Jaeyoon Jung, Jinjoo Lee, and Sang Hoon Woo

    In this work, we aim to analyze and optimize the EnCLAP framework, a state-of-the-art model in automated audio captioning. We investigate the impact of modifying the acoustic encoder components, explore pretraining with different dataset scales, and study the effectiveness of a reranking scheme. Through extensive experimentation and quantitative analysis of generated captions, we develop EnCLAP++, an enhanced version that significantly surpasses the original.

  • Interspeech

    JenGAN: Stacked Shifted Filters in GAN-Based Speech Synthesis

    Hyunjae Cho*, Junhyeok Lee*, and Wonbin Jung

    Non-autoregressive GAN-based neural vocoders are widely used due to their fast inference speed and high perceptual quality. However, they often suffer from audible artifacts such as tonal artifacts in their generated results. Therefore, we propose JenGAN, a new training strategy that involves stacking shifted low-pass filters to ensure the shift-equivariant property. This method helps prevent aliasing and reduce artifacts while preserving the model structure used during inference. In our experimental evaluation, JenGAN consistently enhances the performance of vocoder models, yielding significantly superior scores across the majority of evaluation metrics.
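    For reference, the snippet below illustrates the shift-equivariance property the JenGAN abstract targets: a low-pass filter commutes with an integer shift away from the signal boundaries. This is a generic numpy illustration of the property only, not the JenGAN training strategy itself (which stacks shifted filters inside the vocoder during training).

```python
# Generic shift-equivariance check for a low-pass FIR filter.
import numpy as np

def lowpass(x: np.ndarray, cutoff: float = 0.25, taps: int = 63) -> np.ndarray:
    # Windowed-sinc low-pass filter (cutoff in cycles/sample).
    n = np.arange(taps) - (taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n) * np.hamming(taps)
    return np.convolve(x, h, mode="same")

x = np.random.randn(4096)
shifted_then_filtered = lowpass(np.roll(x, 7))
filtered_then_shifted = np.roll(lowpass(x), 7)
print(np.max(np.abs(shifted_then_filtered - filtered_then_shifted)[100:-100]))  # ~0 away from edges
```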

  • ICASSP

    EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning

    Jaeyeon Kim*, Jaeyoon Jung, Jinjoo Lee, and Sang Hoon Woo

    We propose EnCLAP, a novel framework for automated audio captioning. EnCLAP employs two acoustic representation models, EnCodec and CLAP, along with a pretrained language model, BART. We also introduce a new training objective called masked codec modeling that improves acoustic awareness of the pretrained language model. Experimental results on AudioCaps and Clotho demonstrate that our model surpasses the performance of baseline models. Source code will be available at https://github.com/jaeyeonkim99/EnCLAP. An online demo is available at https://huggingface.co/spaces/enclap-team/enclap.

    Github

    Demo
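    The masked codec modeling objective named in the EnCLAP abstract can be pictured as masking a fraction of the discrete codec tokens and computing a cross-entropy loss only at the masked positions. The sketch below is a hedged illustration with a toy model; the vocabulary size, mask ratio, and architecture are illustrative choices, not the paper's configuration (see the linked repository).

```python
# Hedged sketch of a masked-codec-modeling loss over discrete codec tokens.
import torch
import torch.nn.functional as F

VOCAB, MASK_ID, MASK_RATIO = 1024, 1024, 0.15   # MASK_ID is one extra token id

def masked_codec_loss(model, codec_tokens: torch.Tensor) -> torch.Tensor:
    # codec_tokens: (batch, seq_len) integer ids from a neural audio codec.
    mask = torch.rand_like(codec_tokens, dtype=torch.float) < MASK_RATIO
    inputs = codec_tokens.masked_fill(mask, MASK_ID)
    logits = model(inputs)                        # (batch, seq_len, VOCAB)
    return F.cross_entropy(logits[mask], codec_tokens[mask])  # masked positions only

# Toy usage with an embedding + linear "model" just to show the shapes.
toy = torch.nn.Sequential(
    torch.nn.Embedding(VOCAB + 1, 64), torch.nn.Linear(64, VOCAB)
)
tokens = torch.randint(0, VOCAB, (2, 50))
print(masked_codec_loss(toy, tokens).item())
```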

  • ICASSP

    Learning Semantic Information from Raw Audio Signal Using Both Contextual and Phonetic Representations

    Jaeyeon Kim*, Injune Hwang, and Kyogu Lee

    We propose a framework to learn semantics from raw audio signals using two types of representations, encoding contextual and phonetic information respectively. Specifically, we introduce a speech-to-unit processing pipeline that captures two types of representations with different time resolutions. For the language model, we adopt a dual-channel architecture to incorporate both types of representation. We also present new training objectives, masked context reconstruction and masked context prediction, that push models to learn semantics effectively. Experiments on the sSIMI metric of Zero Resource Speech Benchmark 2021 and Fluent Speech Command dataset show our framework learns semantics better than models trained with only one type of representation.

2023

  • ICML Workshop

    PITS: Variational Pitch Inference without Fundamental Frequency for End-to-End Pitch-controllable TTS

    Junhyeok Lee*, Wonbin Jung, Hyunjae Cho, Jaeyeon Kim, and Jaehwan Kim

    Previous pitch-controllable text-to-speech (TTS) models rely on directly modeling the fundamental frequency, leading to low variance in synthesized speech. To address this issue, we propose PITS, an end-to-end pitch-controllable TTS model that utilizes variational inference to model pitch. Based on VITS, PITS incorporates the Yingram encoder, the Yingram decoder, and adversarial training of pitch-shifted synthesis to achieve pitch controllability. Experiments demonstrate that PITS generates high-quality speech that is indistinguishable from ground-truth speech and offers high pitch controllability without quality degradation.

    Github

    Demo

  • ICML Workshop

    E3-VITS: Emotional End-to-End TTS with Cross-speaker Style Transfer

    Wonbin Jung* and Junhyeok Lee

    Since previous emotional TTS models are based on a two-stage pipeline or additional labels, their training process is complex and incurs a high labeling cost. To deal with this problem, this paper presents E3-VITS, an end-to-end emotional TTS model that addresses the limitations of existing models. E3-VITS synthesizes high-quality speech under multi-speaker conditions, supports emotional speech synthesis from both reference speech and textual descriptions, and enables cross-speaker emotion transfer with a disjoint dataset. To implement E3-VITS, we propose batch-permuted style perturbation, which generates audio samples with unpaired emotion to increase the quality of cross-speaker emotion transfer. Results show that E3-VITS outperforms the baseline model in terms of naturalness, speaker and emotion similarity, and inference speed.

    Github

    Demo
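    One way to read the batch-permuted style perturbation mentioned in the E3-VITS abstract: permute the emotion/style embeddings within a batch so that each content sample is decoded with another sample's emotion, yielding unpaired combinations for cross-speaker emotion transfer. The sketch below is a hedged illustration with made-up tensor shapes, not the paper's implementation.

```python
# Hedged sketch of batch-level style permutation for unpaired emotion transfer.
import torch

def batch_permuted_styles(style_emb: torch.Tensor) -> torch.Tensor:
    # style_emb: (batch, style_dim). Shuffle styles across the batch so that
    # no sample keeps its own style (a simple cyclic derangement).
    perm = torch.roll(torch.arange(style_emb.size(0)), shifts=1)
    return style_emb[perm]

content = torch.randn(4, 192)   # content/linguistic features
style = torch.randn(4, 256)     # reference-emotion embeddings
mixed = batch_permuted_styles(style)
# mixed[i] now carries sample (i-1)'s emotion, to be decoded with content[i].
```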

  • ICASSP

    PhaseAug: A Differentiable Augmentation for Speech Synthesis to Simulate One-to-Many Mapping

    Junhyeok Lee*, Seungu Han, Hyunjae Cho, and Wonbin Jung

    Previous generative adversarial network (GAN)-based neural vocoders are trained to reconstruct the exact ground-truth waveform from the paired mel-spectrogram and do not consider the one-to-many relationship of speech synthesis. This conventional training causes overfitting for both the discriminators and the generator, leading to periodicity artifacts in the generated audio signal. In this work, we present PhaseAug, the first differentiable augmentation for speech synthesis, which rotates the phase of each frequency bin to simulate one-to-many mapping. With our proposed method, we outperform baselines without any architecture modification. Code and audio samples are available at https://github.com/maum-ai/phaseaug.

    Github

    Demo
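    The core operation the PhaseAug abstract describes, rotating the phase of each frequency bin, can be sketched with an STFT round trip as below. This is a hedged, simplified illustration (uniformly sampled rotation angles, no attention to differentiability details); the official implementation linked above is the authoritative version.

```python
# Hedged sketch of per-frequency-bin phase rotation via STFT/iSTFT.
import math
import torch

def rotate_phase(wav: torch.Tensor, n_fft: int = 1024, hop: int = 256) -> torch.Tensor:
    # wav: (batch, samples). Rotate each frequency bin by a random angle.
    window = torch.hann_window(n_fft)
    spec = torch.stft(wav, n_fft, hop, window=window, return_complex=True)
    phi = torch.rand(spec.size(1)) * 2 * math.pi            # one angle per bin
    rot = torch.polar(torch.ones_like(phi), phi).view(1, -1, 1)
    return torch.istft(spec * rot, n_fft, hop, window=window, length=wav.size(-1))

augmented = rotate_phase(torch.randn(2, 16000))
```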

2022

  • Interspeech

    SANE-TTS: Stable And Natural End-to-End Multilingual Text-to-Speech

    Hyunjae Cho*, Wonbin Jung, Junhyeok Lee, and Sang Hoon Woo

    In this paper, we present SANE-TTS, a stable and natural end-to-end multilingual TTS model. Because it is difficult to obtain a multilingual corpus for a given speaker, training a multilingual TTS model with monolingual corpora is unavoidable. We introduce a speaker regularization loss that improves speech naturalness during cross-lingual synthesis, in addition to domain adversarial training, which is applied in other multilingual TTS models. Furthermore, with the speaker regularization loss added, replacing the speaker embedding with a zero vector in the duration predictor stabilizes cross-lingual inference. With this replacement, our model generates speech with moderate rhythm regardless of the source speaker in cross-lingual synthesis. In MOS evaluation, SANE-TTS achieves a naturalness score above 3.80 in both cross-lingual and intralingual synthesis, where the ground-truth score is 3.99. SANE-TTS also maintains speaker similarity close to that of the ground truth even in cross-lingual inference. Audio samples are available on our web page.

    Demo
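    A hedged sketch of the zero-vector trick mentioned in the SANE-TTS abstract: at cross-lingual inference the duration predictor receives a zero vector in place of the speaker embedding, so predicted durations do not depend on the source speaker. The module below is a made-up stand-in for shape illustration, not the SANE-TTS network.

```python
# Hedged sketch: duration prediction with the speaker embedding zeroed out
# for cross-lingual inference.
import torch

class DurationPredictor(torch.nn.Module):
    def __init__(self, text_dim: int = 192, spk_dim: int = 256):
        super().__init__()
        self.proj = torch.nn.Linear(text_dim + spk_dim, 1)

    def forward(self, text_h, spk_emb, cross_lingual: bool):
        if cross_lingual:
            spk_emb = torch.zeros_like(spk_emb)   # drop speaker identity
        spk = spk_emb.unsqueeze(1).expand(-1, text_h.size(1), -1)
        h = torch.cat([text_h, spk], dim=-1)
        return torch.relu(self.proj(h)).squeeze(-1)   # non-negative per-token durations

dp = DurationPredictor()
durations = dp(torch.randn(2, 30, 192), torch.randn(2, 256), cross_lingual=True)
```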

  • Interspeech

    NU-Wave 2: A General Neural Audio Upsampling Model for Various Sampling Rates

    Seungu Han* and Junhyeok Lee

    Conventional audio super-resolution models fix the initial and target sampling rates, which necessitates training a separate model for each pair of sampling rates. We introduce NU-Wave 2, a diffusion model for neural audio upsampling that enables the generation of 48 kHz audio signals from inputs of various sampling rates with a single model. Based on the architecture of NU-Wave, NU-Wave 2 uses short-time Fourier convolution (STFC) to generate harmonics, resolving the main failure modes of NU-Wave, and incorporates bandwidth spectral feature transform (BSFT) to condition on the bandwidths of inputs in the frequency domain. We experimentally demonstrate that NU-Wave 2 produces high-resolution audio regardless of the input sampling rate while requiring fewer parameters than other models.

    Github

    Demo
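    A hedged sketch of bandwidth-conditioned feature modulation in the spirit of what the NU-Wave 2 abstract calls BSFT: spectral features are scaled and shifted by parameters predicted from a bandwidth indicator marking which frequency bins of the input are valid. The channel counts and conditioning network here are illustrative assumptions, not the paper's layer (see the linked repository).

```python
# Hedged sketch of scale-and-shift modulation conditioned on input bandwidth.
import torch

class BandwidthFeatureTransform(torch.nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        # Predict per-bin scale (gamma) and shift (beta) from a 1-channel
        # bandwidth mask over frequency bins.
        self.to_gamma_beta = torch.nn.Conv1d(1, 2 * channels, kernel_size=3, padding=1)

    def forward(self, spec_feat: torch.Tensor, bw_mask: torch.Tensor) -> torch.Tensor:
        # spec_feat: (batch, channels, freq_bins); bw_mask: (batch, 1, freq_bins)
        gamma, beta = self.to_gamma_beta(bw_mask).chunk(2, dim=1)
        return gamma * spec_feat + beta

bsft = BandwidthFeatureTransform()
out = bsft(torch.randn(2, 32, 513), torch.ones(2, 1, 513))
```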

  • PR Letters

    Zero-Shot Semantic Segmentation via Spatial and Multi-Scale Aware Visual Class Embedding

    Sungguk Cha* and Yooseung Wang*

    Fully supervised semantic segmentation technologies have brought a paradigm shift in scene understanding. However, the burden of expensive labeling costs remains a challenge. To address this cost problem, recent studies have proposed language-model-based zero-shot semantic segmentation (L-ZSSS) approaches. In this paper, we point out that L-ZSSS has a limitation in generalization, which is a virtue of zero-shot learning. To tackle this limitation, we propose a language-model-free zero-shot semantic segmentation framework, the Spatial and Multi-scale aware Visual Class Embedding Network (SM-VCENet). Leveraging vision-oriented class embeddings, SM-VCENet enriches the visual information of the class embedding through multi-scale attention and spatial attention. We also propose a novel benchmark (PASCAL2COCO) for zero-shot semantic segmentation, which provides generalization evaluation via domain adaptation and contains visually challenging samples. In experiments, SM-VCENet outperforms the zero-shot semantic segmentation state of the art by a significant margin on both the PASCAL-5i and PASCAL2COCO benchmarks.

  • CVPR Demo

    Talking Face Generation with Multilingual TTS

    Hyoung-Kyu Song*, Sang Hoon Woo*, Junhyeok Lee, Seungmin Yang, Hyunjae Cho, Dongho Choi, Kang-wook Kim, and Youseong Lee

    In this work, we propose a joint system combining a talking face generation system with a text-to-speech system that can generate multilingual talking face videos from only the text input. Our system can synthesize natural multilingual speeches while maintaining the vocal identity of the speaker, as well as lip movements synchronized to the synthesized speech. We demonstrate the generalization capabilities of our system by selecting four languages (Korean, English, Japanese, and Chinese) each from a different language family. We also compare the outputs of our talking face generation model to outputs of a prior work that claims multilingual support. For our demo, we add a translation API to the preprocessing stage and present it in the form of a neural dubber so that users can utilize the multilingual property of our system more easily.

    🤗 Demo

    Screencast

  • CVPR

    Styleformer: Transformer based Generative Adversarial Networks with Style Vector

    Jeeseung Park* and Younggeun Kim*

    We propose Styleformer, a style-based generator for GAN architectures that is transformer-based and convolution-free. We explain how a transformer can generate high-quality images, overcoming the disadvantage that convolution operations struggle to capture global features in an image. Furthermore, we change the demodulation of StyleGAN2 and modify the existing transformer structure (e.g., residual connection, layer normalization) to create a strong style-based generator with a convolution-free structure. We also make Styleformer lighter by applying Linformer, enabling Styleformer to generate higher-resolution images with improvements in speed and memory. We experiment with low-resolution image datasets such as CIFAR-10, as well as high-resolution image datasets such as LSUN-church. Styleformer records an FID of 2.82 and an IS of 9.94 on CIFAR-10, comparable to the current state of the art, and outperforms all GAN-based generative models, including StyleGAN2-ADA, with fewer parameters in the unconditional setting. We also achieve new state-of-the-art results on STL-10 (FID 15.17, IS 11.01) and CelebA (FID 3.66).

    Github

  • ICASSP

    Assem-VC: Realistic Voice Conversion by Assembling Modern Speech Synthesis Techniques

    Kang-wook Kim*, Seung-won Park, Junhyeok Lee, and Myun-chul Joe

    Recent works on voice conversion (VC) focus on preserving the rhythm and the intonation as well as the linguistic content. To preserve these features from the source, we decompose current non-parallel VC systems into two encoders and one decoder. We analyze each module with several experiments and reassemble the best components to propose Assem-VC, a new state-of-the-art any-to-many non-parallel VC system. We also show that PPG and Cotatron features are speaker-dependent and attempt to remove speaker identity with adversarial training.

    Github

2021

  • NeurIPS Workshop (Oral)

    Controllable and Interpretable Singing Voice Decomposition via Assem-VC

    Kang-wook Kim* and Junhyeok Lee

    We propose a singing decomposition system that encodes time-aligned linguistic content, pitch, and source speaker identity via Assem-VC. With the decomposed speaker-independent information and the target speaker's embedding, we can synthesize the singing voice of the target speaker. As a result, we produce a perfectly synced duet between the user's singing voice and the target singer's converted singing voice.

    Github

    Demo

  • Interspeech

    NU-Wave: A Diffusion Probabilistic Model for Neural Audio Upsampling

    Junhyeok Lee* and Seungu Han

    In this work, we introduce NU-Wave, the first neural audio upsampling model to produce waveforms at a 48 kHz sampling rate from coarse 16 kHz or 24 kHz inputs, while prior works could generate only up to 16 kHz. NU-Wave is the first diffusion probabilistic model for audio super-resolution, engineered on the basis of neural vocoders. NU-Wave generates high-quality audio, achieving high performance in terms of signal-to-noise ratio (SNR), log-spectral distance (LSD), and ABX test accuracy. In all cases, NU-Wave outperforms the baseline models despite its substantially smaller model capacity (3.0M parameters, only 5.4-21% of the baselines').

    Github

    Demo

  • CVPR Workshop

    Sharp Edge Recovery via SE(3)-Equivariant Network

    Youseong Lee*

    Talk

2020

  • NeurIPS Workshop

    DS4C Patient Policy Province Dataset: A Comprehensive COVID-19 Dataset for Causal and Epidemiological Analysis

    Jimi Kim*, DongHwan Jang, Seojin Jang, Woncheol Lee, and Joong Kun Lee

    Workshop

  • Interspeech

    Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion without Parallel Data

    Seung-won Park*, Doo-young Kim, and Myun-chul Joe

    We propose Cotatron, a transcription-guided speech encoder for speaker-independent linguistic representation. Cotatron is based on the multispeaker TTS architecture and can be trained with conventional TTS datasets. We train a voice conversion system to reconstruct speech with Cotatron features, which is similar to previous methods based on Phonetic Posteriorgram (PPG). By training and evaluating our system with 108 speakers from the VCTK dataset, we outperform the previous method in terms of both naturalness and speaker similarity. Our system can also convert speech from speakers unseen during training and can utilize ASR to automate transcription with minimal performance degradation.

    Github

    Demo

    Talk

  • ECCV Workshop

    3D Room Layout Estimation Beyond the Manhattan World Assumption

    Dongho Choi*

    Predicting the 3D room layout from a single image is a challenging task with many applications. In this paper, we propose a new training and post-processing method for 3D room layout estimation, built on a recent state-of-the-art 3D room layout estimation model. Experimental results show that our method outperforms state-of-the-art approaches by a large margin in predicting the visible room layout. Our method obtained 3rd place in the 2020 Holistic Scene Structures for 3D Vision Workshop.