Stars
Stress test comparing HC vs mHC signal propagation stability in deep neural networks
A flexible PyTorch template for ML experiments with configuration management, logging, and hyperparameter optimization.
A job-posting matching system that matches job seekers' résumés with job postings and retrieves and recommends résumés suited to each posting.
[ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts
[ICCV 2025] Official code for paper: Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs
[CIKM 2025] This is the repository for the paper titled "Exploring Diverse Sparse Network Structures via Dynamic Pruning with Weight Alignment"
OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM
Official PyTorch implementation of QwT, "Quantization without Tears" (CVPR 2025): fast, accurate, and hassle-free post-training network quantization with lightweight linear compensation layers.
Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning
The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.
Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Thesis/Dissertation typst template for Yonsei University
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
Unofficial implementation of https://arxiv.org/pdf/2407.14679
[NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
Compressed LLMs for Efficient Text Generation [ICLR'24 Workshop]
[UAI 2024] This is the repository for the paper titled "Cooperative Meta-Learning with Gradient Augmentation" https://arxiv.org/abs/2406.04639
Official implementation of the CVPR 2024 paper ViT-CoMer: Vision Transformer with Convolutional Multi-scale Feature Interaction for Dense Predictions.
[ICLR 2023 Spotlight] Vision Transformer Adapter for Dense Predictions
A high-throughput and memory-efficient inference and serving engine for LLMs
The official MCP server implementation for the Perplexity API Platform
[NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baichuan, TinyLlama, etc.