<h1 align="center">Deep-Live-Cam 2.0.2c</h1>
<p align="center">
Real-time face swap and video deepfake with a single click and only a single image.
</p>
<p align="center">
<a href="https://trendshift.io/repositories/11395" target="_blank"><img src="https://trendshift.io/api/badge/repositories/11395" alt="hacksider%2FDeep-Live-Cam | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
</p>
<p align="center">
<img src="media/demo.gif" alt="Demo GIF" width="800">
</p>
## Disclaimer
This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.
We are aware of the potential for unethical applications and are committed to preventative measures. A built-in check prevents the program from processing inappropriate media (nudity, graphic content, sensitive material like war footage, etc.). We will continue to develop this project responsibly, adhering to the law and ethics. We may shut down the project or add watermarks if legally required.
- **Ethical Use:** Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.
- **Content Restrictions:** The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.
- **Legal Compliance:** We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.
- **User Responsibility:** We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.
By using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.
## Exclusive v2.6 Quick Start - Pre-built (Windows/Mac Silicon)
<a href="https://deeplivecam.net/index.php/quickstart"><img src="media/Download.png" width="285" height="77" /></a>
This is the fastest build available if you have a discrete NVIDIA or AMD GPU or Apple Silicon, and you'll receive special priority support.
These pre-built packages are perfect for non-technical users, or those who don't have the time or ability to install all the requirements manually. Just a heads-up: this is an open-source project, so you can also install it manually.
## TLDR; Live Deepfake in just 3 Clicks
1. Select a face
2. Select which camera to use
3. Press live!
## Features & Uses - Everything is in real-time
### Mouth Mask
**Retain your original mouth for accurate movement using Mouth Mask**
<p align="center">
<img src="media/ludwig.gif" alt="resizable-gif">
</p>
### Face Mapping
**Use different faces on multiple subjects simultaneously**
<p align="center">
<img src="media/streamers.gif" alt="face_mapping_source">
</p>
### Your Movie, Your Face
**Watch movies with any face in real-time**
<p align="center">
<img src="media/movie.gif" alt="movie">
</p>
### Live Show
**Run Live shows and performances**
<p align="center">
<img src="media/live_show.gif" alt="show">
</p>
### Memes
**Create Your Most Viral Meme Yet**
<p align="center">
<img src="media/meme.gif" alt="show" width="450">
<br>
<sub>Created using Many Faces feature in Deep-Live-Cam</sub>
</p>
### Omegle
**Surprise people on Omegle**
<p align="center">
<video src="https://github.com/user-attachments/assets/2e9b9b82-fa04-4b70-9f56-b1f68e7672d0" width="450" controls></video>
</p>
## Installation (Manual)
**Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the quickstart version.**
<details>
<summary>Click to see the process</summary>
Installation
This is more likely to work on your computer but will be slower as it utilizes the CPU.
**1. Set up Your Platform**
- Python (3.11 recommended)
- pip
- git
- ffmpeg
- Visual Studio 2022 Runtimes (Windows)
**2. Clone the Repository**
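A minimal sketch of the clone step, assuming the `hacksider/Deep-Live-Cam` repository shown on the badge above:

```shell
# Clone the project and change into its folder
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
```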
**3. Download the Models**
- GFPGANv1.4
- inswapper_128_fp16.onnx
Place these files in the "**models**" folder.
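A sketch of placing the downloaded weights, assuming the default repository layout; the `~/Downloads` source paths are illustrative, so adjust them to wherever your browser saved the files:

```shell
# Create the models folder at the repository root if it doesn't exist
mkdir -p models
# Move the downloaded weights into it
mv ~/Downloads/GFPGANv1.4.pth models/
mv ~/Downloads/inswapper_128_fp16.onnx models/
```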
**4. Install Dependencies**
We highly recommend using a venv to avoid issues.
For Windows:
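A typical venv setup on Windows might look like the following sketch, assuming Python is on your `PATH` and the repository ships a `requirements.txt`:

```shell
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```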
For Linux:
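The equivalent sketch on Linux, again assuming a `requirements.txt` at the repository root:

```shell
# Create and activate a virtual environment, then install the requirements
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```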
**For macOS:**
Apple Silicon (M1/M2/M3) requires specific setup:
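One possible setup sketch, assuming Homebrew is installed; the Python 3.10 and `python-tk@3.10` requirements come from the macOS notes later in this section:

```shell
# Install Python 3.10 and its Tk bindings via Homebrew
brew install python@3.10 python-tk@3.10
# Create and activate a 3.10 virtual environment, then install the requirements
python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```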
**In case something goes wrong and you need to reinstall the virtual environment**
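A recovery sketch, assuming the environment lives in a `venv` folder as created above:

```shell
# Deactivate if active, delete the old environment, and rebuild it
deactivate 2>/dev/null || true
rm -rf venv
python -m venv venv
source venv/bin/activate   # on Windows: venv\Scripts\activate
pip install -r requirements.txt
```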
**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).
### GPU Acceleration
**CUDA Execution Provider (Nvidia)**
1. Install CUDA Toolkit 12.8.0
2. Install cuDNN v8.9.7 for CUDA 12.x (required for onnxruntime-gpu):
   - Download cuDNN v8.9.7 for CUDA 12.x
   - Make sure the cuDNN `bin` directory is in your system `PATH`
Install dependencies:
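A sketch of swapping the CPU runtime for the GPU build; pinning an `onnxruntime-gpu` version that matches your CUDA 12.x install is left to you, so check the ONNX Runtime compatibility matrix:

```shell
# Remove any existing runtime, then install the GPU build of ONNX Runtime
pip uninstall -y onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu
```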
Usage:
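Assuming the project's `--execution-provider` flag, launching with CUDA would look like:

```shell
python run.py --execution-provider cuda
```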
**CoreML Execution Provider (Apple Silicon)**
Apple Silicon (M1/M2/M3) specific installation:
Make sure you've completed the macOS setup above using Python 3.10.
Install dependencies:
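A sketch of the equivalent swap for Apple Silicon, using the community `onnxruntime-silicon` build; verify the version pairing for your setup:

```shell
# Replace the default runtime with the Apple Silicon build
pip uninstall -y onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon
```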
Usage (important: specify Python 3.10):
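Assuming the project's `--execution-provider` flag, and invoking Python 3.10 explicitly as the notes below require:

```shell
python3.10 run.py --execution-provider coreml
```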
**Important Notes for macOS:**
- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the "**models**" folder
- If you encounter conflicts with other Python versions, consider uninstalling them:
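An illustrative (and destructive) example of removing a conflicting Homebrew Python; only do this if you are sure nothing else on your system depends on it, and adjust the version to match yours:

```shell
# Example: remove a conflicting Homebrew Python 3.11
brew uninstall --ignore-dependencies python@3.11
```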
**CoreML Execution Provider (Apple Legacy)**
Install dependencies:
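A sketch for older Intel Macs, assuming the legacy `onnxruntime-coreml` package is the intended runtime here:

```shell
# Replace the default runtime with the legacy CoreML build
pip uninstall -y onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml
```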
Usage:
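Assuming the project's `--execution-provider` flag:

```shell
python run.py --execution-provider coreml
```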
**DirectML Execution Provider (Windows)**
Install dependencies:
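A sketch of the equivalent swap for DirectML on Windows, using the official `onnxruntime-directml` package:

```shell
# Replace the default runtime with the DirectML build
pip uninstall -y onnxruntime onnxruntime-directml
pip install onnxruntime-directml
```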
Usage:
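Assuming the project's `--execution-provider` flag:

```shell
python run.py --execution-provider directml
```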
**OpenVINO™ Execution Provider (Intel)**
Install dependencies:
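A sketch of the equivalent swap for Intel hardware, using the official `onnxruntime-openvino` package:

```shell
# Replace the default runtime with the OpenVINO build
pip uninstall -y onnxruntime onnxruntime-openvino
pip install onnxruntime-openvino
```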
Usage:
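Assuming the project's `--execution-provider` flag:

```shell
python run.py --execution-provider openvino
```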
</details>
## Usage
**1. Image/Video Mode**
Execute python r…