Compare commits
153 Commits
# Collaboration Guidelines and Codebase Quality Standards

To ensure smooth collaboration and maintain the high quality of our codebase, please adhere to the following guidelines:

## Branching Strategy

* **`premain`**:
  * Always push your changes to the `premain` branch initially.
  * This safeguards the `main` branch from unintentional disruptions.
  * All tests will be performed on the `premain` branch.
  * Changes will only be merged into `main` after several hours or days of rigorous testing.
* **`experimental`**:
  * For large or potentially disruptive changes, use the `experimental` branch.
  * This allows for thorough discussion and review before considering a merge into `main`.

## Pre-Pull Request Checklist

Before creating a Pull Request (PR), ensure you have completed the following tests:

### Functionality

* **Realtime Faceswap**:
  * Test with the face enhancer **enabled** and **disabled**.
* **Map Faces**:
  * Test with both options (**enabled** and **disabled**).
* **Camera Listing**:
  * Verify that all cameras are listed accurately.

### Stability

* **Realtime FPS**:
  * Confirm that there is no drop in real-time frames per second (FPS).
* **Boot Time**:
  * Changes should not negatively impact the boot time of either the application or the real-time faceswap feature.
* **GPU Overloading**:
  * Test for a minimum of 15 minutes to guarantee no GPU overloading, which could lead to crashes.
* **App Performance**:
  * The application should remain responsive and not exhibit any lag.
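The Realtime FPS check above can be made concrete with a quick measurement loop. A minimal sketch follows; the `process_frame` stub is a stand-in for the real swap pipeline (an assumption for illustration, not project code):

```python
import time

def process_frame():
    """Stand-in for one frame of the real-time swap pipeline (illustrative)."""
    time.sleep(0.001)  # simulate ~1 ms of per-frame work

def measure_fps(seconds: float = 1.0) -> float:
    """Run the per-frame work repeatedly and report frames per second."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < seconds:
        process_frame()
        frames += 1
    return frames / (time.perf_counter() - start)

print(f"{measure_fps(0.5):.1f} FPS")
```

Comparing the number before and after your change gives an objective answer to "did FPS drop?".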
README.md
</p>

<p align="center">
  <img src="media/demo.gif" alt="Demo GIF" width="800">
</p>
## Disclaimer

This deepfake software is designed to be a productive tool for the AI-generated media industry. It can assist artists in animating custom characters, creating engaging content, and even using models for clothing design.

- Ethical Use: Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online.

- Content Restrictions: The software includes built-in checks to prevent processing inappropriate media, such as nudity, graphic content, or sensitive material.

- Legal Compliance: We adhere to all relevant laws and ethical guidelines. If legally required, we may shut down the project or add watermarks to the output.

- User Responsibility: We are not responsible for end-user actions. Users must ensure their use of the software aligns with ethical standards and legal requirements.

By using this software, you agree to these terms and commit to using it in a manner that respects the rights and dignity of others.
## Quick Start (Windows / Nvidia)

<a href="https://deeplivecam.net/index.php/quickstart"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" /></a>
## TLDR; Live Deepfake in just 3 Clicks

1. Select a face
2. Select which camera to use
3. Press live!
## Features & Uses - Everything is in real-time

### Mouth Mask

**Retain your original mouth for accurate movement using Mouth Mask**

<p align="center">
  <img src="media/ludwig.gif" alt="resizable-gif">
</p>

### Face Mapping

**Use different faces on multiple subjects simultaneously**

<p align="center">
  <img src="media/streamers.gif" alt="face_mapping_source">
</p>

### Your Movie, Your Face

**Watch movies with any face in real-time**

<p align="center">
  <img src="media/movie.gif" alt="movie">
</p>

### Live Show

**Run live shows and performances**

<p align="center">
  <img src="media/live_show.gif" alt="show">
</p>

### Memes

**Create Your Most Viral Meme Yet**

<p align="center">
  <img src="media/meme.gif" alt="show" width="450">
  <br>
  <sub>Created using the Many Faces feature in Deep-Live-Cam</sub>
</p>

### Omegle

**Surprise people on Omegle**

<p align="center">
  <video src="https://github.com/user-attachments/assets/2e9b9b82-fa04-4b70-9f56-b1f68e7672d0" width="450" controls></video>
</p>
## Installation (Manual)

**Please be aware that the installation requires technical skills and is not for beginners. Consider downloading the prebuilt version. Please do NOT open platform- and installation-related issues on GitHub before discussing them on the Discord server.**

### Basic Installation (CPU)

<details>
<summary>Click to see the process</summary>
### Installation

This is more likely to work on your computer, but it will be slower as it utilizes the CPU.

**1. Set up Your Platform**

- Python (3.10 recommended)
- pip
- git
- [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA) - ```iex (irm ffmpeg.tc.ht)```
- [Visual Studio 2022 Runtimes (Windows)](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
**2. Clone the Repository**

```bash
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
```
**3. Download the Models**

1. [GFPGANv1.4](https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.pth)
2. [inswapper\_128\_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)

Place these files in the "**models**" folder.
We highly recommend using a `venv` to avoid issues.

For Windows:

```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
**For macOS:**

Apple Silicon (M1/M2/M3) requires specific setup:

```bash
# Install Python 3.10 (the specific version is important)
brew install python@3.10

# Install the tkinter package (required for the GUI)
brew install python-tk@3.10

# Create and activate a virtual environment with Python 3.10
python3.10 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```
**In case something goes wrong and you need to reinstall the virtual environment:**

```bash
# Remove the broken virtual environment
rm -rf venv

# Recreate the virtual environment
python -m venv venv
source venv/bin/activate

# Install the dependencies again
pip install -r requirements.txt
```
**Run:** If you don't have a GPU, you can run Deep-Live-Cam with `python run.py`. Note that the initial execution will download models (~300 MB).
### GPU Acceleration (Optional)

<details>
<summary>Click to see the details</summary>
**CUDA Execution Provider (Nvidia)**

1. Install [CUDA Toolkit 11.8.0](https://developer.nvidia.com/cuda-11-8-0-download-archive)
2. Install dependencies:

```bash
pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu==1.16.3
```

3. Usage:

```bash
python run.py --execution-provider cuda
```
**CoreML Execution Provider (Apple Silicon)**

Apple Silicon (M1/M2/M3) specific installation:

1. Make sure you've completed the macOS setup above using Python 3.10.
2. Install dependencies:

```bash
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
```

3. Usage (important: specify Python 3.10):

```bash
python3.10 run.py --execution-provider coreml
```
**Important Notes for macOS:**

- You **must** use Python 3.10, not newer versions like 3.11 or 3.13.
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed.
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder.
- If you encounter conflicts with other Python versions, consider uninstalling them:

  ```bash
  # List all installed Python versions
  brew list | grep python

  # Uninstall conflicting versions if needed
  brew uninstall --ignore-dependencies python@3.11 python@3.13

  # Clean up leftover files
  brew cleanup
  ```
**CoreML Execution Provider (Apple Legacy)**

1. Install dependencies:

```bash
pip uninstall onnxruntime onnxruntime-coreml
pip install onnxruntime-coreml==1.13.1
```

2. Usage:

```bash
python run.py --execution-provider coreml
```
**DirectML Execution Provider (Windows)**

1. Install dependencies:

```bash
pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.15.1
```

2. Usage:

```bash
python run.py --execution-provider directml
```
@ -130,75 +261,51 @@ python run.py --execution-provider directml
|
|||
**OpenVINO™ Execution Provider (Intel)**
|
||||
|
||||
1. Install dependencies:
|
||||
|
||||
```bash
|
||||
pip uninstall onnxruntime onnxruntime-openvino
|
||||
pip install onnxruntime-openvino==1.15.0
|
||||
```
|
||||
|
||||
2. Usage:
|
||||
|
||||
```bash
|
||||
python run.py --execution-provider openvino
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
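The `--execution-provider` flag maps to ONNX Runtime execution providers; conceptually, the app runs on the requested provider when it is available and falls back to the CPU otherwise. The provider names below are real ONNX Runtime identifiers, but the selection helper itself is a hypothetical sketch of that preference logic, not the project's code:

```python
def pick_provider(requested: list, available: list) -> str:
    """Return the first requested provider that is available, else fall back to CPU."""
    for provider in requested:
        if provider in available:
            return provider
    return "CPUExecutionProvider"

# Example: CUDA requested, but only DirectML and CPU are available.
print(pick_provider(["CUDAExecutionProvider"],
                    ["DmlExecutionProvider", "CPUExecutionProvider"]))
# -> CPUExecutionProvider
```

This is why installing the matching `onnxruntime-*` package (as shown above) matters: without it, the requested provider is simply not in the available list.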
## Usage

**1. Image/Video Mode**

- Execute `python run.py`.
- Choose a source face image and a target image/video.
- Click "Start".
- The output will be saved in a directory named after the target video.

**2. Webcam Mode**

- Execute `python run.py`.
- Select a source face image.
- Click "Live".
- Wait for the preview to appear (10-30 seconds).
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.
## Tips and Tricks

Check out these helpful guides to get the most out of Deep-Live-Cam:

- [Unlocking the Secrets to the Perfect Deepfake Image](https://deeplivecam.net/index.php/blog/tips-and-tricks/unlocking-the-secrets-to-the-perfect-deepfake-image) - Learn how to create the best deepfake with full head coverage
- [Video Call with DeepLiveCam](https://deeplivecam.net/index.php/blog/tips-and-tricks/video-call-with-deeplivecam) - Make your meetings livelier by using DeepLiveCam with OBS and meeting software
- [Have a Special Guest!](https://deeplivecam.net/index.php/blog/tips-and-tricks/have-a-special-guest) - Tutorial on how to use face mapping to add special guests to your stream
- [Watch Deepfake Movies in Realtime](https://deeplivecam.net/index.php/blog/tips-and-tricks/watch-deepfake-movies-in-realtime) - See yourself star in any video without processing the video
- [Better Quality without Sacrificing Speed](https://deeplivecam.net/index.php/blog/tips-and-tricks/better-quality-without-sacrificing-speed) - Tips for achieving better results without impacting performance
- [Instant Vtuber!](https://deeplivecam.net/index.php/blog/tips-and-tricks/instant-vtuber) - Create a new persona/vtuber easily using Metahuman Creator

Visit our [official blog](https://deeplivecam.net/index.php/blog/tips-and-tricks) for more tips and tutorials.
## Command Line Arguments (Unmaintained)

```
options:
  --keep-frames                                 keep temporary frames
  --many-faces                                  process every face
  --map-faces                                   map source target faces
  --nsfw-filter                                 filter the NSFW image or video
  --mouth-mask                                  mask the mouth region
  --video-encoder {libx264,libx265,libvpx-vp9}  adjust output video encoder
  --video-quality [0-51]                        adjust output video quality
  --live-mirror                                 the live camera display as you see it in the front-facing camera frame
```
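The `--video-quality [0-51]` option behaves like an ffmpeg CRF-style setting, where lower numbers mean better quality and larger files. A tiny sketch of clamping user input into that valid range (the helper is purely illustrative, not the project's argument parser):

```python
def clamp_quality(q: int) -> int:
    """Clamp a requested video quality into the CRF-style range [0, 51]."""
    return min(51, max(0, q))

print(clamp_quality(60))  # out-of-range input -> 51
print(clamp_quality(18))  # in-range input passes through -> 18
```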
Looking for a CLI mode? Using the `-s`/`--source` argument will make the program run in CLI mode.

## Webcam Mode on WSL2 Ubuntu (Optional)
<details>
<summary>Click to see the details</summary>

This tutorial will guide you through setting up WSL2 Ubuntu with USB webcam support, rebuilding the kernel, and preparing the environment for the Deep-Live-Cam project. Ubuntu on WSL2 doesn't ship with USB webcam support in its kernel, so you need to do two things: compile the kernel with the right modules integrated, and forward your USB webcam from Windows to Ubuntu with the usbipd tool.

**1. Install WSL2 Ubuntu**

Install WSL2 Ubuntu from the Microsoft Store or using PowerShell.
**2. Enable USB Support in WSL2**

1. Install the USB/IP tool for Windows:
   [https://learn.microsoft.com/en-us/windows/wsl/connect-usb](https://learn.microsoft.com/en-us/windows/wsl/connect-usb)

2. In Windows PowerShell (as Administrator), connect your webcam to WSL:

```powershell
usbipd list
usbipd bind --busid x-x # Replace x-x with your webcam's bus ID
usbipd attach --wsl --busid x-x # Replace x-x with your webcam's bus ID
```

You need to redo the attach step every time you reboot WSL or reconnect your webcam/USB device.
**3. Rebuild the WSL2 Ubuntu Kernel with USB and Webcam Modules**

Follow these steps to rebuild the kernel:

1. Start with this guide: [https://github.com/PINTO0309/wsl2_linux_kernel_usbcam_enable_conf](https://github.com/PINTO0309/wsl2_linux_kernel_usbcam_enable_conf)

2. When you reach the `sudo wget ...PINTO0309` step, which won't work for newer kernel versions, follow this video instead (or follow the video tutorial from the beginning):
   [https://www.youtube.com/watch?v=t_YnACEPmrM](https://www.youtube.com/watch?v=t_YnACEPmrM)

   Additional info: [https://askubuntu.com/questions/1413377/camera-not-working-in-cheese-in-wsl2](https://askubuntu.com/questions/1413377/camera-not-working-in-cheese-in-wsl2)

3. After rebuilding, restart WSL with the new kernel.
**4. Set Up the Deep-Live-Cam Project**

Within Ubuntu:

1. Clone the repository:

```bash
git clone https://github.com/hacksider/Deep-Live-Cam
```

2. Follow the installation instructions in the repository, including CUDA Toolkit 11.8; make 100% sure it's not CUDA Toolkit 12.x.
**5. Verify and Load Kernel Modules**

1. Check if the USB webcam module is built into the kernel:

```bash
zcat /proc/config.gz | grep -i "CONFIG_USB_VIDEO_CLASS"
```

2. If the module is loadable (m), not built-in (y), check that the module file exists:

```bash
ls /lib/modules/$(uname -r)/kernel/drivers/media/usb/uvc/
```

3. Load the module and check for errors (optional if built-in):

```bash
sudo modprobe uvcvideo
dmesg | tail
```

4. Verify video devices:

```bash
sudo ls -al /dev/video*
```
**6. Set Up Permissions**

1. Add your user to the video group and set permissions:

```bash
sudo usermod -a -G video $USER
sudo chgrp video /dev/video0 /dev/video1
sudo chmod 660 /dev/video0 /dev/video1
```

2. Create a udev rule for permanent permissions:

```bash
sudo nano /etc/udev/rules.d/81-webcam.rules
```

Add this content:

```
KERNEL=="video[0-9]*", GROUP="video", MODE="0660"
```

3. Reload the udev rules:

```bash
sudo udevadm control --reload-rules && sudo udevadm trigger
```

4. Log out and log back into your WSL session.
5. Start Deep-Live-Cam with `python run.py --execution-provider cuda --max-memory 8`, where 8 can be changed to your GPU's VRAM in GB minus 1-2 GB. If you have an RTX 3080 with 10 GB, I suggest using 8 GB; leave some for Windows.
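The VRAM rule of thumb above is simple arithmetic; as a sketch (the helper function is purely illustrative, not part of the project):

```python
def suggested_max_memory(vram_gb: int, headroom_gb: int = 2) -> int:
    """Suggest a --max-memory value: GPU VRAM minus headroom left for Windows."""
    return max(1, vram_gb - headroom_gb)

print(suggested_max_memory(10))  # RTX 3080 with 10 GB -> 8
```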
**Final Notes**

- Steps 5 and 6 may be optional if the modules are built into the kernel and the permissions are already set correctly.
- Always ensure you're using compatible versions of CUDA, ONNX Runtime, and other dependencies.
- If issues persist, consider checking the Deep-Live-Cam project's specific requirements and troubleshooting steps.

By following these steps, you should have a WSL2 Ubuntu environment with USB webcam support ready for the Deep-Live-Cam project. If you encounter any issues, refer back to the specific error messages and troubleshooting steps provided.
**Troubleshooting CUDA Issues**

If you encounter this error:

```
[ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcufft.so.10: cannot open shared object file: No such file or directory
```

Follow these steps:

1. Install CUDA Toolkit 11.8 (ONNX Runtime 1.16.3 requires CUDA 11.x, not 12.x):
   [https://developer.nvidia.com/cuda-11-8-0-download-archive](https://developer.nvidia.com/cuda-11-8-0-download-archive)
   (select: Linux, x86_64, WSL-Ubuntu, 2.0, deb (local))

2. Check the installed CUDA version:

```bash
/usr/local/cuda/bin/nvcc --version
```

3. If the wrong version is installed, remove it completely:
   [https://askubuntu.com/questions/530043/removing-nvidia-cuda-toolkit-and-installing-new-one](https://askubuntu.com/questions/530043/removing-nvidia-cuda-toolkit-and-installing-new-one)

4. Install CUDA Toolkit 11.8 again ([https://developer.nvidia.com/cuda-11-8-0-download-archive](https://developer.nvidia.com/cuda-11-8-0-download-archive), select: Linux, x86_64, WSL-Ubuntu, 2.0, deb (local)):

```bash
sudo apt-get -y install cuda-toolkit-11-8
```

</details>
## Future Updates & Roadmap

For the latest experimental builds and features, see the [experimental branch](https://github.com/hacksider/Deep-Live-Cam/tree/experimental).

**TODO:**

- [ ] Develop a version for web app/service
- [ ] Speed up model loading
- [ ] Speed up real-time face swapping
- [x] Support multiple faces
- [x] UI/UX enhancements for desktop app

This is an open-source project developed in our free time. Updates may be delayed.
**Tips and Links:**

- [How to make the most of Deep-Live-Cam](https://hacksider.gumroad.com/p/how-to-make-the-most-on-deep-live-cam)
- The face enhancer is good, but it is still very slow for any live-streaming purpose.

## Press

**We are always open to criticism and are ready to improve; that's why we didn't cherry-pick anything.**

- [*"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger"*](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - Ars Technica
- [*"Thanks Deep Live Cam, shapeshifters are among us now"*](https://dataconomy.com/2024/08/15/what-is-deep-live-cam-github-deepfake/) - Dataconomy
- [*"This free AI tool lets you become anyone during video-calls"*](https://www.newsbytesapp.com/news/science/deep-live-cam-ai-impersonation-tool-goes-viral/story) - NewsBytes
- [*"OK, this viral AI live stream software is truly terrifying"*](https://www.creativebloq.com/ai/ok-this-viral-ai-live-stream-software-is-truly-terrifying) - Creative Bloq
- [*"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo"*](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - PetaPixel
- [*"Deep-Live-Cam Uses AI to Transform Your Face in Real-Time, Celebrities Included"*](https://www.techeblog.com/deep-live-cam-ai-transform-face/) - TechEBlog
- [*"An AI tool that "makes you look like anyone" during a video call is going viral online"*](https://telegrafi.com/en/a-tool-that-makes-you-look-like-anyone-during-a-video-call-is-going-viral-on-the-Internet/) - Telegrafi
- [*"This Deepfake Tool Turning Images Into Livestreams is Topping the GitHub Charts"*](https://decrypt.co/244565/this-deepfake-tool-turning-images-into-livestreams-is-topping-the-github-charts) - Emerge
- [*"New Real-Time Face-Swapping AI Allows Anyone to Mimic Famous Faces"*](https://www.digitalmusicnews.com/2024/08/15/face-swapping-ai-real-time-mimic/) - Digital Music News
- [*"This real-time webcam deepfake tool raises alarms about the future of identity theft"*](https://www.diyphotography.net/this-real-time-webcam-deepfake-tool-raises-alarms-about-the-future-of-identity-theft/) - DIYPhotography
- [*"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"*](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - SomeOrdinaryGamers
- [*"Alright look look look, now look chat, we can do any face we want to look like chat"*](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - IShowSpeed
## Credits

- [ffmpeg](https://ffmpeg.org/): for making video-related operations easy
- [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project, which provided a well-made library and models. Please be reminded that the [use of the model is for non-commercial research purposes only](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).
- [havok2-htwo](https://github.com/havok2-htwo): for sharing the code for webcam support
- [GosuDRM](https://github.com/GosuDRM): for the open version of roop
- [pereiraroland26](https://github.com/pereiraroland26): for multiple faces support
- [vic4key](https://github.com/vic4key): for supporting/contributing to this project
- [KRSHH](https://github.com/KRSHH): for his contributions
- [kier007](https://github.com/kier007): for improving the user experience
- [qitianai](https://github.com/qitianai): for multi-lingual support
- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind the libraries used in this project
- Footnote: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
- All the wonderful users who helped make this project go viral by starring the repo ❤️

[Stargazers](https://github.com/hacksider/Deep-Live-Cam/stargazers)
## Contributions

## Star History

## Stars to the Moon 🚀

[Star History Chart](https://star-history.com/#hacksider/deep-live-cam&Date)

@@ -0,0 +1,46 @@
{
    "Source x Target Mapper": "Source x Target Mapper",
    "select an source image": "选择一个源图像",
    "Preview": "预览",
    "select an target image or video": "选择一个目标图像或视频",
    "save image output file": "保存图像输出文件",
    "save video output file": "保存视频输出文件",
    "select an target image": "选择一个目标图像",
    "source": "源",
    "Select a target": "选择一个目标",
    "Select a face": "选择一张脸",
    "Keep audio": "保留音频",
    "Face Enhancer": "面纹增强器",
    "Many faces": "多脸",
    "Show FPS": "显示帧率",
    "Keep fps": "保持帧率",
    "Keep frames": "保持帧数",
    "Fix Blueish Cam": "修复偏蓝的摄像头",
    "Mouth Mask": "口罩",
    "Show Mouth Mask Box": "显示口罩盒",
    "Start": "开始",
    "Live": "直播",
    "Destroy": "结束",
    "Map faces": "识别人脸",
    "Processing...": "处理中...",
    "Processing succeed!": "处理成功!",
    "Processing ignored!": "处理被忽略!",
    "Failed to start camera": "启动相机失败",
    "Please complete pop-up or close it.": "请先完成弹出窗口或者关闭它",
    "Getting unique faces": "获取独特面部",
    "Please select a source image first": "请先选择一个源图像",
    "No faces found in target": "目标图像中没有人脸",
    "Add": "添加",
    "Clear": "清除",
    "Submit": "确认",
    "Select source image": "请选取源图像",
    "Select target image": "请选取目标图像",
    "Please provide mapping!": "请提供映射",
    "Atleast 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
    "At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
    "Face could not be detected in last upload!": "最近上传的图像中没有检测到人脸!",
    "Select Camera:": "选择摄像头",
    "All mappings cleared!": "所有映射均已清除!",
    "Mappings successfully submitted!": "成功提交映射!",
    "Source x Target Mapper is already open.": "源 x 目标映射器已打开。"
}
BIN media/demo.mp4
BIN media/movie.gif (Before: 1.6 MiB | After: 14 MiB)
@@ -41,8 +41,10 @@ def parse_args() -> None:
    program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False)
    program.add_argument('--nsfw-filter', help='filter the NSFW image or video', dest='nsfw_filter', action='store_true', default=False)
    program.add_argument('--map-faces', help='map source target faces', dest='map_faces', action='store_true', default=False)
+   program.add_argument('--mouth-mask', help='mask the mouth region', dest='mouth_mask', action='store_true', default=False)
    program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9'])
    program.add_argument('--video-quality', help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]')
+   program.add_argument('-l', '--lang', help='Ui language', default="en")
    program.add_argument('--live-mirror', help='The live camera display as you see it in the front-facing camera frame', dest='live_mirror', action='store_true', default=False)
    program.add_argument('--live-resizable', help='The live camera frame is resizable', dest='live_resizable', action='store_true', default=False)
    program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory())
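The `--video-quality` flag in this hunk pairs `choices=range(52)` with `metavar='[0-51]'`: `choices` rejects out-of-range values while `metavar` keeps `--help` from printing all 52 alternatives. A standalone sketch (flag name and defaults copied from the hunk, the parser itself is illustrative):

```python
import argparse

# Minimal reproduction of the quality flag: choices validates the value,
# metavar keeps the help text compact instead of listing 0..51 individually.
parser = argparse.ArgumentParser()
parser.add_argument('--video-quality', dest='video_quality', type=int,
                    default=18, choices=range(52), metavar='[0-51]')

args = parser.parse_args(['--video-quality', '23'])
print(args.video_quality)  # 23
```

Passing a value such as `52` would make `parse_args` exit with an "invalid choice" error, which is the validation the hunk relies on.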
@@ -67,6 +69,7 @@ def parse_args() -> None:
    modules.globals.keep_audio = args.keep_audio
    modules.globals.keep_frames = args.keep_frames
    modules.globals.many_faces = args.many_faces
+   modules.globals.mouth_mask = args.mouth_mask
    modules.globals.nsfw_filter = args.nsfw_filter
    modules.globals.map_faces = args.map_faces
    modules.globals.video_encoder = args.video_encoder

@@ -76,6 +79,7 @@ def parse_args() -> None:
    modules.globals.max_memory = args.max_memory
    modules.globals.execution_providers = decode_execution_providers(args.execution_provider)
    modules.globals.execution_threads = args.execution_threads
+   modules.globals.lang = args.lang

    #for ENHANCER tumbler:
    if 'face_enhancer' in args.frame_processor:

@@ -251,5 +255,5 @@ def run() -> None:
    if modules.globals.headless:
        start()
    else:
-       window = ui.init(start, destroy)
+       window = ui.init(start, destroy, modules.globals.lang)
    window.mainloop()
@@ -39,13 +39,13 @@ def get_many_faces(frame: Frame) -> Any:
        return None


def has_valid_map() -> bool:
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
        if "source" in map and "target" in map:
            return True
    return False


def default_source_face() -> Any:
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
        if "source" in map:
            return map['source']['face']
    return None

@@ -53,7 +53,7 @@ def default_source_face() -> Any:
def simplify_maps() -> Any:
    centroids = []
    faces = []
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
        if "source" in map and "target" in map:
            centroids.append(map['target']['face'].normed_embedding)
            faces.append(map['source']['face'])

@@ -64,10 +64,10 @@ def simplify_maps() -> Any:
def add_blank_map() -> Any:
    try:
        max_id = -1
-        if len(modules.globals.souce_target_map) > 0:
-            max_id = max(modules.globals.souce_target_map, key=lambda x: x['id'])['id']
+        if len(modules.globals.source_target_map) > 0:
+            max_id = max(modules.globals.source_target_map, key=lambda x: x['id'])['id']

-        modules.globals.souce_target_map.append({
+        modules.globals.source_target_map.append({
            'id' : max_id + 1
            })
    except ValueError:

@@ -75,14 +75,14 @@ def add_blank_map() -> Any:

def get_unique_faces_from_target_image() -> Any:
    try:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
        target_frame = cv2.imread(modules.globals.target_path)
        many_faces = get_many_faces(target_frame)
        i = 0

        for face in many_faces:
            x_min, y_min, x_max, y_max = face['bbox']
-            modules.globals.souce_target_map.append({
+            modules.globals.source_target_map.append({
                'id' : i,
                'target' : {
                    'cv2' : target_frame[int(y_min):int(y_max), int(x_min):int(x_max)],

@@ -96,7 +96,7 @@ def get_unique_faces_from_target_image() -> Any:

def get_unique_faces_from_target_video() -> Any:
    try:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
        frame_face_embeddings = []
        face_embeddings = []

@@ -127,7 +127,7 @@ def get_unique_faces_from_target_video() -> Any:
            face['target_centroid'] = closest_centroid_index

    for i in range(len(centroids)):
-        modules.globals.souce_target_map.append({
+        modules.globals.source_target_map.append({
            'id' : i
            })

@@ -135,7 +135,7 @@ def get_unique_faces_from_target_video() -> Any:
        for frame in tqdm(frame_face_embeddings, desc=f"Mapping frame embeddings to centroids-{i}"):
            temp.append({'frame': frame['frame'], 'faces': [face for face in frame['faces'] if face['target_centroid'] == i], 'location': frame['location']})

-        modules.globals.souce_target_map[i]['target_faces_in_frame'] = temp
+        modules.globals.source_target_map[i]['target_faces_in_frame'] = temp

        # dump_faces(centroids, frame_face_embeddings)
        default_target_face()

@@ -144,7 +144,7 @@ def get_unique_faces_from_target_video() -> Any:

def default_target_face():
-    for map in modules.globals.souce_target_map:
+    for map in modules.globals.source_target_map:
        best_face = None
        best_frame = None
        for frame in map['target_faces_in_frame']:
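The rename above touches every consumer of `source_target_map`. As a rough illustration of the structure those functions walk (entry values here are invented placeholders, not real face objects), each list entry pairs one detected target face with an optionally-assigned source face:

```python
# Hypothetical entries: each dict pairs one detected target face with an
# optionally-assigned source face. Strings stand in for insightface objects.
source_target_map = [
    {"id": 0, "source": {"face": "src_face_0"}, "target": {"face": "tgt_face_0"}},
    {"id": 1, "target": {"face": "tgt_face_1"}},  # no source picked yet
]

def has_valid_map(maps) -> bool:
    # Same check as the diff's has_valid_map(): at least one entry
    # has both a source and a target assigned.
    return any("source" in m and "target" in m for m in maps)

print(has_valid_map(source_target_map))  # True
```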
@@ -0,0 +1,26 @@
import json
from pathlib import Path


class LanguageManager:
    def __init__(self, default_language="en"):
        self.current_language = default_language
        self.translations = {}
        self.load_language(default_language)

    def load_language(self, language_code) -> bool:
        """load language file"""
        if language_code == "en":
            return True
        try:
            file_path = Path(__file__).parent.parent / f"locales/{language_code}.json"
            with open(file_path, "r", encoding="utf-8") as file:
                self.translations = json.load(file)
            self.current_language = language_code
            return True
        except FileNotFoundError:
            print(f"Language file not found: {language_code}")
            return False

    def _(self, key, default=None) -> str:
        """get translate text"""
        return self.translations.get(key, default if default else key)
@@ -9,7 +9,7 @@ file_types = [
    ("Video", ("*.mp4", "*.mkv")),
]

-souce_target_map = []
+source_target_map = []
simple_map = {}

source_path = None

@@ -26,7 +26,7 @@ nsfw_filter = False
video_encoder = None
video_quality = None
live_mirror = False
-live_resizable = False
+live_resizable = True
max_memory = None
execution_providers: List[str] = []
execution_threads = None
@@ -1,3 +1,3 @@
-name = 'Deep Live Cam'
-version = '1.7.0'
-edition = 'Portable'
+name = 'Deep-Live-Cam'
+version = '1.8'
+edition = 'GitHub Edition'
@@ -9,6 +9,8 @@ import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face
from modules.typing import Frame, Face
+import platform
+import torch
from modules.utilities import (
    conditional_download,
    is_image,

@@ -21,7 +23,10 @@ THREAD_LOCK = threading.Lock()
NAME = "DLC.FACE-ENHANCER"

abs_dir = os.path.dirname(os.path.abspath(__file__))
-models_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), 'models')
+models_dir = os.path.join(
+    os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), "models"
+)


def pre_check() -> bool:
    download_directory_path = models_dir

@@ -48,8 +53,18 @@ def get_face_enhancer() -> Any:
    with THREAD_LOCK:
        if FACE_ENHANCER is None:
-            model_path = os.path.join(models_dir, 'GFPGANv1.4.pth')
-            FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]
+            model_path = os.path.join(models_dir, "GFPGANv1.4.pth")
+
+            match platform.system():
+                case "Darwin":  # Mac OS
+                    if torch.backends.mps.is_available():
+                        mps_device = torch.device("mps")
+                        FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=mps_device)  # type: ignore[attr-defined]
+                    else:
+                        FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]
+                case _:  # Other OS
+                    FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]

    return FACE_ENHANCER
@@ -4,6 +4,7 @@ import insightface
import threading
import numpy as np
import modules.globals
+import logging
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face

@@ -21,7 +22,10 @@ THREAD_LOCK = threading.Lock()
NAME = "DLC.FACE-SWAPPER"

abs_dir = os.path.dirname(os.path.abspath(__file__))
-models_dir = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), 'models')
+models_dir = os.path.join(
+    os.path.dirname(os.path.dirname(os.path.dirname(abs_dir))), "models"
+)


def pre_check() -> bool:
    download_directory_path = abs_dir

@@ -56,7 +60,7 @@ def get_face_swapper() -> Any:
    with THREAD_LOCK:
        if FACE_SWAPPER is None:
-            model_path = os.path.join(models_dir, 'inswapper_128_fp16.onnx')
+            model_path = os.path.join(models_dir, "inswapper_128_fp16.onnx")
            FACE_SWAPPER = insightface.model_zoo.get_model(
                model_path, providers=modules.globals.execution_providers
            )

@@ -102,24 +106,30 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
    many_faces = get_many_faces(temp_frame)
    if many_faces:
        for target_face in many_faces:
-            temp_frame = swap_face(source_face, target_face, temp_frame)
+            if source_face and target_face:
+                temp_frame = swap_face(source_face, target_face, temp_frame)
+            else:
+                print("Face detection failed for target/source.")
    else:
        target_face = get_one_face(temp_frame)
-        if target_face:
+        if target_face and source_face:
            temp_frame = swap_face(source_face, target_face, temp_frame)
+        else:
+            logging.error("Face detection failed for target or source.")
    return temp_frame


def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
    if is_image(modules.globals.target_path):
        if modules.globals.many_faces:
            source_face = default_source_face()
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                target_face = map["target"]["face"]
                temp_frame = swap_face(source_face, target_face, temp_frame)

        elif not modules.globals.many_faces:
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                if "source" in map:
                    source_face = map["source"]["face"]
                    target_face = map["target"]["face"]

@@ -128,7 +138,7 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
    elif is_video(modules.globals.target_path):
        if modules.globals.many_faces:
            source_face = default_source_face()
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                target_frame = [
                    f
                    for f in map["target_faces_in_frame"]

@@ -140,7 +150,7 @@ def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
                temp_frame = swap_face(source_face, target_face, temp_frame)

        elif not modules.globals.many_faces:
-            for map in modules.globals.souce_target_map:
+            for map in modules.globals.source_target_map:
                if "source" in map:
                    target_frame = [
                        f
modules/ui.py (334 lines changed)
@@ -7,7 +7,6 @@ from cv2_enumerate_cameras import enumerate_cameras  # Add this import
from PIL import Image, ImageOps
-import time
import json

import modules.globals
import modules.metadata
from modules.face_analyser import (

@@ -26,6 +25,12 @@ from modules.utilities import (
    resolve_relative_path,
    has_image_extension,
)
from modules.video_capture import VideoCapturer
+from modules.gettext import LanguageManager
+import platform
+
+if platform.system() == "Windows":
+    from pygrabber.dshow_graph import FilterGraph

ROOT = None
POPUP = None

@@ -59,6 +64,7 @@ RECENT_DIRECTORY_SOURCE = None
RECENT_DIRECTORY_TARGET = None
RECENT_DIRECTORY_OUTPUT = None

+_ = None
preview_label = None
preview_slider = None
source_label = None

@@ -73,9 +79,11 @@ target_label_dict_live = {}
img_ft, vid_ft = modules.globals.file_types


-def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
-    global ROOT, PREVIEW
+def init(start: Callable[[], None], destroy: Callable[[], None], lang: str) -> ctk.CTk:
+    global ROOT, PREVIEW, _
+
+    lang_manager = LanguageManager(lang)
+    _ = lang_manager._
    ROOT = create_root(start, destroy)
    PREVIEW = create_preview(ROOT)
@@ -96,7 +104,7 @@ def save_switch_states():
        "fp_ui": modules.globals.fp_ui,
        "show_fps": modules.globals.show_fps,
        "mouth_mask": modules.globals.mouth_mask,
-        "show_mouth_mask_box": modules.globals.show_mouth_mask_box
+        "show_mouth_mask_box": modules.globals.show_mouth_mask_box,
    }
    with open("switch_states.json", "w") as f:
        json.dump(switch_states, f)

@@ -118,7 +126,9 @@ def load_switch_states():
        modules.globals.fp_ui = switch_states.get("fp_ui", {"face_enhancer": False})
        modules.globals.show_fps = switch_states.get("show_fps", False)
        modules.globals.mouth_mask = switch_states.get("mouth_mask", False)
-        modules.globals.show_mouth_mask_box = switch_states.get("show_mouth_mask_box", False)
+        modules.globals.show_mouth_mask_box = switch_states.get(
+            "show_mouth_mask_box", False
+        )
    except FileNotFoundError:
        # If the file doesn't exist, use default values
        pass

@@ -148,7 +158,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)

    select_face_button = ctk.CTkButton(
-        root, text="Select a face", cursor="hand2", command=lambda: select_source_path()
+        root, text=_("Select a face"), cursor="hand2", command=lambda: select_source_path()
    )
    select_face_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)

@@ -159,7 +169,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    select_target_button = ctk.CTkButton(
        root,
-        text="Select a target",
+        text=_("Select a target"),
        cursor="hand2",
        command=lambda: select_target_path(),
    )

@@ -168,7 +178,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    keep_fps_value = ctk.BooleanVar(value=modules.globals.keep_fps)
    keep_fps_checkbox = ctk.CTkSwitch(
        root,
-        text="Keep fps",
+        text=_("Keep fps"),
        variable=keep_fps_value,
        cursor="hand2",
        command=lambda: (

@@ -181,7 +191,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    keep_frames_value = ctk.BooleanVar(value=modules.globals.keep_frames)
    keep_frames_switch = ctk.CTkSwitch(
        root,
-        text="Keep frames",
+        text=_("Keep frames"),
        variable=keep_frames_value,
        cursor="hand2",
        command=lambda: (

@@ -194,7 +204,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    enhancer_value = ctk.BooleanVar(value=modules.globals.fp_ui["face_enhancer"])
    enhancer_switch = ctk.CTkSwitch(
        root,
-        text="Face Enhancer",
+        text=_("Face Enhancer"),
        variable=enhancer_value,
        cursor="hand2",
        command=lambda: (
@@ -207,7 +217,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    keep_audio_value = ctk.BooleanVar(value=modules.globals.keep_audio)
    keep_audio_switch = ctk.CTkSwitch(
        root,
-        text="Keep audio",
+        text=_("Keep audio"),
        variable=keep_audio_value,
        cursor="hand2",
        command=lambda: (

@@ -220,7 +230,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    many_faces_value = ctk.BooleanVar(value=modules.globals.many_faces)
    many_faces_switch = ctk.CTkSwitch(
        root,
-        text="Many faces",
+        text=_("Many faces"),
        variable=many_faces_value,
        cursor="hand2",
        command=lambda: (

@@ -233,7 +243,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    color_correction_value = ctk.BooleanVar(value=modules.globals.color_correction)
    color_correction_switch = ctk.CTkSwitch(
        root,
-        text="Fix Blueish Cam",
+        text=_("Fix Blueish Cam"),
        variable=color_correction_value,
        cursor="hand2",
        command=lambda: (

@@ -250,12 +260,13 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    map_faces = ctk.BooleanVar(value=modules.globals.map_faces)
    map_faces_switch = ctk.CTkSwitch(
        root,
-        text="Map faces",
+        text=_("Map faces"),
        variable=map_faces,
        cursor="hand2",
        command=lambda: (
            setattr(modules.globals, "map_faces", map_faces.get()),
            save_switch_states(),
+            close_mapper_window() if not map_faces.get() else None
        ),
    )
    map_faces_switch.place(relx=0.1, rely=0.75)

@@ -263,7 +274,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    show_fps_value = ctk.BooleanVar(value=modules.globals.show_fps)
    show_fps_switch = ctk.CTkSwitch(
        root,
-        text="Show FPS",
+        text=_("Show FPS"),
        variable=show_fps_value,
        cursor="hand2",
        command=lambda: (

@@ -276,7 +287,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    mouth_mask_var = ctk.BooleanVar(value=modules.globals.mouth_mask)
    mouth_mask_switch = ctk.CTkSwitch(
        root,
-        text="Mouth Mask",
+        text=_("Mouth Mask"),
        variable=mouth_mask_var,
        cursor="hand2",
        command=lambda: setattr(modules.globals, "mouth_mask", mouth_mask_var.get()),

@@ -286,7 +297,7 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    show_mouth_mask_box_var = ctk.BooleanVar(value=modules.globals.show_mouth_mask_box)
    show_mouth_mask_box_switch = ctk.CTkSwitch(
        root,
-        text="Show Mouth Mask Box",
+        text=_("Show Mouth Mask Box"),
        variable=show_mouth_mask_box_var,
        cursor="hand2",
        command=lambda: setattr(
@@ -296,48 +307,59 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
    show_mouth_mask_box_switch.place(relx=0.6, rely=0.55)

    start_button = ctk.CTkButton(
-        root, text="Start", cursor="hand2", command=lambda: analyze_target(start, root)
+        root, text=_("Start"), cursor="hand2", command=lambda: analyze_target(start, root)
    )
    start_button.place(relx=0.15, rely=0.80, relwidth=0.2, relheight=0.05)

    stop_button = ctk.CTkButton(
-        root, text="Destroy", cursor="hand2", command=lambda: destroy()
+        root, text=_("Destroy"), cursor="hand2", command=lambda: destroy()
    )
    stop_button.place(relx=0.4, rely=0.80, relwidth=0.2, relheight=0.05)

    preview_button = ctk.CTkButton(
-        root, text="Preview", cursor="hand2", command=lambda: toggle_preview()
+        root, text=_("Preview"), cursor="hand2", command=lambda: toggle_preview()
    )
    preview_button.place(relx=0.65, rely=0.80, relwidth=0.2, relheight=0.05)

    # --- Camera Selection ---
-    camera_label = ctk.CTkLabel(root, text="Select Camera:")
+    camera_label = ctk.CTkLabel(root, text=_("Select Camera:"))
    camera_label.place(relx=0.1, rely=0.86, relwidth=0.2, relheight=0.05)

    available_cameras = get_available_cameras()
-    # Convert camera indices to strings for CTkOptionMenu
-    available_camera_indices, available_camera_strings = available_cameras
-    camera_variable = ctk.StringVar(
-        value=(
-            available_camera_strings[0]
-            if available_camera_strings
-            else "No cameras found"
-        )
-    )
-    camera_optionmenu = ctk.CTkOptionMenu(
-        root, variable=camera_variable, values=available_camera_strings
-    )
+    camera_indices, camera_names = available_cameras
+
+    if not camera_names or camera_names[0] == "No cameras found":
+        camera_variable = ctk.StringVar(value="No cameras found")
+        camera_optionmenu = ctk.CTkOptionMenu(
+            root,
+            variable=camera_variable,
+            values=["No cameras found"],
+            state="disabled",
+        )
+    else:
+        camera_variable = ctk.StringVar(value=camera_names[0])
+        camera_optionmenu = ctk.CTkOptionMenu(
+            root, variable=camera_variable, values=camera_names
+        )

    camera_optionmenu.place(relx=0.35, rely=0.86, relwidth=0.25, relheight=0.05)

    live_button = ctk.CTkButton(
        root,
-        text="Live",
+        text=_("Live"),
        cursor="hand2",
        command=lambda: webcam_preview(
            root,
-            available_camera_indices[
-                available_camera_strings.index(camera_variable.get())
-            ],
+            (
+                camera_indices[camera_names.index(camera_variable.get())]
+                if camera_names and camera_names[0] != "No cameras found"
+                else None
+            ),
        ),
+        state=(
+            "normal"
+            if camera_names and camera_names[0] != "No cameras found"
+            else "disabled"
+        ),
    )
    live_button.place(relx=0.65, rely=0.86, relwidth=0.2, relheight=0.05)
@@ -354,11 +376,20 @@ def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
        text_color=ctk.ThemeManager.theme.get("URL").get("text_color")
    )
    donate_label.bind(
-        "<Button>", lambda event: webbrowser.open("https://paypal.me/hacksider")
+        "<Button>", lambda event: webbrowser.open("https://deeplivecam.net")
    )

    return root

+
+def close_mapper_window():
+    global POPUP, POPUP_LIVE
+    if POPUP and POPUP.winfo_exists():
+        POPUP.destroy()
+        POPUP = None
+    if POPUP_LIVE and POPUP_LIVE.winfo_exists():
+        POPUP_LIVE.destroy()
+        POPUP_LIVE = None


def analyze_target(start: Callable[[], None], root: ctk.CTk):
    if POPUP != None and POPUP.winfo_exists():

@@ -366,7 +397,7 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
        return

    if modules.globals.map_faces:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []

        if is_image(modules.globals.target_path):
            update_status("Getting unique faces")
@@ -375,8 +406,8 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):
            update_status("Getting unique faces")
            get_unique_faces_from_target_video()

-        if len(modules.globals.souce_target_map) > 0:
-            create_source_target_popup(start, root, modules.globals.souce_target_map)
+        if len(modules.globals.source_target_map) > 0:
+            create_source_target_popup(start, root, modules.globals.source_target_map)
        else:
            update_status("No faces found in target")
    else:

@@ -384,12 +415,12 @@ def analyze_target(start: Callable[[], None], root: ctk.CTk):

def create_source_target_popup(
    start: Callable[[], None], root: ctk.CTk, map: list
) -> None:
    global POPUP, popup_status_label

    POPUP = ctk.CTkToplevel(root)
-    POPUP.title("Source x Target Mapper")
+    POPUP.title(_("Source x Target Mapper"))
    POPUP.geometry(f"{POPUP_WIDTH}x{POPUP_HEIGHT}")
    POPUP.focus()

@@ -413,7 +444,7 @@ def create_source_target_popup(
        button = ctk.CTkButton(
            scrollable_frame,
-            text="Select source image",
+            text=_("Select source image"),
            command=lambda id=id: on_button_click(map, id),
            width=DEFAULT_BUTTON_WIDTH,
            height=DEFAULT_BUTTON_HEIGHT,

@@ -447,18 +478,18 @@ def create_source_target_popup(
    popup_status_label.grid(row=1, column=0, pady=15)

    close_button = ctk.CTkButton(
-        POPUP, text="Submit", command=lambda: on_submit_click(start)
+        POPUP, text=_("Submit"), command=lambda: on_submit_click(start)
    )
    close_button.grid(row=2, column=0, pady=10)


def update_popup_source(
    scrollable_frame: ctk.CTkScrollableFrame, map: list, button_num: int
) -> list:
    global source_label_dict

    source_path = ctk.filedialog.askopenfilename(
-        title="select an source image",
+        title=_("select an source image"),
        initialdir=RECENT_DIRECTORY_SOURCE,
        filetypes=[img_ft],
    )
@@ -478,7 +509,7 @@ def update_popup_source(
        x_min, y_min, x_max, y_max = face["bbox"]

        map[button_num]["source"] = {
-            "cv2": cv2_img[int(y_min) : int(y_max), int(x_min) : int(x_max)],
+            "cv2": cv2_img[int(y_min): int(y_max), int(x_min): int(x_max)],
            "face": face,
        }

@@ -509,7 +540,7 @@ def create_preview(parent: ctk.CTkToplevel) -> ctk.CTkToplevel:
    preview = ctk.CTkToplevel(parent)
    preview.withdraw()
-    preview.title("Preview")
+    preview.title(_("Preview"))
    preview.configure()
    preview.protocol("WM_DELETE_WINDOW", lambda: toggle_preview())
    preview.resizable(width=True, height=True)

@@ -525,16 +556,16 @@ def create_preview(parent: ctk.CTkToplevel) -> ctk.CTkToplevel:

def update_status(text: str) -> None:
-    status_label.configure(text=text)
+    status_label.configure(text=_(text))
    ROOT.update()


def update_pop_status(text: str) -> None:
-    popup_status_label.configure(text=text)
+    popup_status_label.configure(text=_(text))


def update_pop_live_status(text: str) -> None:
-    popup_status_label_live.configure(text=text)
+    popup_status_label_live.configure(text=_(text))


def update_tumbler(var: str, value: bool) -> None:
@ -553,7 +584,7 @@ def select_source_path() -> None:
|
|||
|
||||
PREVIEW.withdraw()
|
||||
source_path = ctk.filedialog.askopenfilename(
|
||||
title="select an source image",
|
||||
title=_("select an source image"),
|
||||
initialdir=RECENT_DIRECTORY_SOURCE,
|
||||
filetypes=[img_ft],
|
||||
)
|
||||
|
@ -596,7 +627,7 @@ def select_target_path() -> None:
|
|||
|
||||
PREVIEW.withdraw()
|
||||
target_path = ctk.filedialog.askopenfilename(
|
||||
title="select an target image or video",
|
||||
title=_("select an target image or video"),
|
||||
initialdir=RECENT_DIRECTORY_TARGET,
|
||||
filetypes=[img_ft, vid_ft],
|
||||
)
|
||||
|
@@ -620,7 +651,7 @@ def select_output_path(start: Callable[[], None]) -> None:
     if is_image(modules.globals.target_path):
         output_path = ctk.filedialog.asksaveasfilename(
-            title="save image output file",
+            title=_("save image output file"),
             filetypes=[img_ft],
             defaultextension=".png",
             initialfile="output.png",
@@ -628,7 +659,7 @@ def select_output_path(start: Callable[[], None]) -> None:
         )
     elif is_video(modules.globals.target_path):
         output_path = ctk.filedialog.asksaveasfilename(
-            title="save video output file",
+            title=_("save video output file"),
             filetypes=[vid_ft],
             defaultextension=".mp4",
             initialfile="output.mp4",
@@ -665,17 +696,21 @@ def check_and_ignore_nsfw(target, destroy: Callable = None) -> bool:

 def fit_image_to_size(image, width: int, height: int):
-    if width is None and height is None:
+    if width is None or height is None or width <= 0 or height <= 0:
         return image
     h, w, _ = image.shape
-    ratio_h = 0.0
-    ratio_w = 0.0
-    if width > height:
-        ratio_h = height / h
-    else:
-        ratio_w = width / w
-    ratio = max(ratio_w, ratio_h)
-    new_size = (int(ratio * w), int(ratio * h))
+    ratio_w = width / w
+    ratio_h = height / h
+    # Use the smaller ratio to ensure the image fits within the given dimensions
+    ratio = min(ratio_w, ratio_h)
+
+    # Compute new dimensions, ensuring they're at least 1 pixel
+    new_width = max(1, int(ratio * w))
+    new_height = max(1, int(ratio * h))
+    new_size = (new_width, new_height)
+
     return cv2.resize(image, dsize=new_size)
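The corrected fit logic (smaller of the two ratios, with a 1-pixel floor) can be sketched as a pure helper; `fit_size` is a hypothetical name used here only for illustration, and in the real code the resulting size would feed `cv2.resize`:

```python
def fit_size(src_w: int, src_h: int, box_w: int, box_h: int):
    # The smaller of the two scale ratios guarantees the result fits
    # inside (box_w, box_h) while preserving aspect ratio; max(1, ...)
    # keeps degenerate inputs from collapsing a dimension to zero.
    ratio = min(box_w / src_w, box_h / src_h)
    return max(1, int(ratio * src_w)), max(1, int(ratio * src_h))
```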
@@ -687,7 +722,7 @@ def render_image_preview(image_path: str, size: Tuple[int, int]) -> ctk.CTkImage

 def render_video_preview(
     video_path: str, size: Tuple[int, int], frame_number: int = 0
 ) -> ctk.CTkImage:
     capture = cv2.VideoCapture(video_path)
     if frame_number:
@@ -727,7 +762,7 @@ def update_preview(frame_number: int = 0) -> None:
     if modules.globals.nsfw_filter and check_and_ignore_nsfw(temp_frame):
         return
     for frame_processor in get_frame_processors_modules(
         modules.globals.frame_processors
     ):
         temp_frame = frame_processor.process_frame(
             get_one_face(cv2.imread(modules.globals.source_path)), temp_frame
@@ -743,54 +778,116 @@ def update_preview(frame_number: int = 0) -> None:

 def webcam_preview(root: ctk.CTk, camera_index: int):
     global POPUP_LIVE

     if POPUP_LIVE and POPUP_LIVE.winfo_exists():
         update_status("Source x Target Mapper is already open.")
         POPUP_LIVE.focus()
         return

     if not modules.globals.map_faces:
         if modules.globals.source_path is None:
             # No image selected
             update_status("Please select a source image first")
             return
         create_webcam_preview(camera_index)
     else:
-        modules.globals.souce_target_map = []
+        modules.globals.source_target_map = []
         create_source_target_popup_for_webcam(
-            root, modules.globals.souce_target_map, camera_index
+            root, modules.globals.source_target_map, camera_index
         )

 def get_available_cameras():
     """Returns a list of available camera names and indices."""
-    camera_indices = []
-    camera_names = []
-    for camera in enumerate_cameras():
-        cap = cv2.VideoCapture(camera.index)
-        if cap.isOpened():
-            camera_indices.append(camera.index)
-            camera_names.append(camera.name)
-            cap.release()
-    return (camera_indices, camera_names)
+    if platform.system() == "Windows":
+        try:
+            graph = FilterGraph()
+            devices = graph.get_input_devices()
+
+            # Create list of indices and names
+            camera_indices = list(range(len(devices)))
+            camera_names = devices
+
+            # If no cameras found through DirectShow, try OpenCV fallback
+            if not camera_names:
+                # Try to open camera with index -1 and 0
+                test_indices = [-1, 0]
+                working_cameras = []
+
+                for idx in test_indices:
+                    cap = cv2.VideoCapture(idx)
+                    if cap.isOpened():
+                        working_cameras.append(f"Camera {idx}")
+                        cap.release()
+
+                if working_cameras:
+                    return test_indices[: len(working_cameras)], working_cameras
+
+            # If still no cameras found, return empty lists
+            if not camera_names:
+                return [], ["No cameras found"]
+
+            return camera_indices, camera_names
+
+        except Exception as e:
+            print(f"Error detecting cameras: {str(e)}")
+            return [], ["No cameras found"]
+    else:
+        # Unix-like systems (Linux/Mac) camera detection
+        camera_indices = []
+        camera_names = []
+
+        if platform.system() == "Darwin":  # macOS specific handling
+            # Try to open the default FaceTime camera first
+            cap = cv2.VideoCapture(0)
+            if cap.isOpened():
+                camera_indices.append(0)
+                camera_names.append("FaceTime Camera")
+                cap.release()
+
+            # On macOS, additional cameras typically use indices 1 and 2
+            for i in [1, 2]:
+                cap = cv2.VideoCapture(i)
+                if cap.isOpened():
+                    camera_indices.append(i)
+                    camera_names.append(f"Camera {i}")
+                    cap.release()
+        else:
+            # Linux camera detection - test first 10 indices
+            for i in range(10):
+                cap = cv2.VideoCapture(i)
+                if cap.isOpened():
+                    camera_indices.append(i)
+                    camera_names.append(f"Camera {i}")
+                    cap.release()
+
+        if not camera_names:
+            return [], ["No cameras found"]
+
+        return camera_indices, camera_names

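The probe-and-fallback pattern in the camera detection above can be factored into a testable shape by injecting the "can this index open?" check; `probe_cameras` and `can_open` are hypothetical names standing in for `cv2.VideoCapture(i).isOpened()`, so the logic can be exercised without hardware:

```python
def probe_cameras(indices, can_open):
    # Scan candidate indices with a supplied predicate; return paired
    # (indices, names) lists, or the "No cameras found" sentinel used
    # by get_available_cameras when nothing opens.
    found = [(i, f"Camera {i}") for i in indices if can_open(i)]
    if not found:
        return [], ["No cameras found"]
    ids, names = zip(*found)
    return list(ids), list(names)
```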
 def create_webcam_preview(camera_index: int):
     global preview_label, PREVIEW

-    camera = cv2.VideoCapture(camera_index)
-    camera.set(cv2.CAP_PROP_FRAME_WIDTH, PREVIEW_DEFAULT_WIDTH)
-    camera.set(cv2.CAP_PROP_FRAME_HEIGHT, PREVIEW_DEFAULT_HEIGHT)
-    camera.set(cv2.CAP_PROP_FPS, 60)
+    cap = VideoCapturer(camera_index)
+    if not cap.start(PREVIEW_DEFAULT_WIDTH, PREVIEW_DEFAULT_HEIGHT, 60):
+        update_status("Failed to start camera")
+        return

     preview_label.configure(width=PREVIEW_DEFAULT_WIDTH, height=PREVIEW_DEFAULT_HEIGHT)

     PREVIEW.deiconify()

     frame_processors = get_frame_processors_modules(modules.globals.frame_processors)

     source_image = None
     prev_time = time.time()
-    fps_update_interval = 0.5  # Update FPS every 0.5 seconds
+    fps_update_interval = 0.5
     frame_count = 0
     fps = 0

-    while camera:
-        ret, frame = camera.read()
+    while True:
+        ret, frame = cap.read()
         if not ret:
             break

@@ -804,6 +901,11 @@ def create_webcam_preview(camera_index: int):
                     temp_frame, PREVIEW.winfo_width(), PREVIEW.winfo_height()
                 )
+            else:
+                temp_frame = fit_image_to_size(
+                    temp_frame, PREVIEW.winfo_width(), PREVIEW.winfo_height()
+                )

         if not modules.globals.map_faces:
             if source_image is None and modules.globals.source_path:
                 source_image = get_one_face(cv2.imread(modules.globals.source_path))
@@ -816,7 +918,6 @@ def create_webcam_preview(camera_index: int):
                 temp_frame = frame_processor.process_frame(source_image, temp_frame)
         else:
             modules.globals.target_path = None
-
             for frame_processor in frame_processors:
                 if frame_processor.NAME == "DLC.FACE-ENHANCER":
                     if modules.globals.fp_ui["face_enhancer"]:
@@ -855,25 +956,25 @@ def create_webcam_preview(camera_index: int):
         if PREVIEW.state() == "withdrawn":
             break

-    camera.release()
+    cap.release()
     PREVIEW.withdraw()


 def create_source_target_popup_for_webcam(
     root: ctk.CTk, map: list, camera_index: int
 ) -> None:
     global POPUP_LIVE, popup_status_label_live

     POPUP_LIVE = ctk.CTkToplevel(root)
-    POPUP_LIVE.title("Source x Target Mapper")
+    POPUP_LIVE.title(_("Source x Target Mapper"))
     POPUP_LIVE.geometry(f"{POPUP_LIVE_WIDTH}x{POPUP_LIVE_HEIGHT}")
     POPUP_LIVE.focus()

     def on_submit_click():
         if has_valid_map():
-            POPUP_LIVE.destroy()
             simplify_maps()
-            create_webcam_preview(camera_index)
+            update_pop_live_status("Mappings successfully submitted!")
+            create_webcam_preview(camera_index)  # Open the preview window
         else:
             update_pop_live_status("At least 1 source with target is required!")
@@ -882,16 +983,43 @@ def create_source_target_popup_for_webcam(
         refresh_data(map)
         update_pop_live_status("Please provide mapping!")

+    def on_clear_click():
+        clear_source_target_images(map)
+        refresh_data(map)
+        update_pop_live_status("All mappings cleared!")
+
     popup_status_label_live = ctk.CTkLabel(POPUP_LIVE, text=None, justify="center")
     popup_status_label_live.grid(row=1, column=0, pady=15)

-    add_button = ctk.CTkButton(POPUP_LIVE, text="Add", command=lambda: on_add_click())
-    add_button.place(relx=0.2, rely=0.92, relwidth=0.2, relheight=0.05)
+    add_button = ctk.CTkButton(POPUP_LIVE, text=_("Add"), command=lambda: on_add_click())
+    add_button.place(relx=0.1, rely=0.92, relwidth=0.2, relheight=0.05)
+
+    clear_button = ctk.CTkButton(POPUP_LIVE, text=_("Clear"), command=lambda: on_clear_click())
+    clear_button.place(relx=0.4, rely=0.92, relwidth=0.2, relheight=0.05)

     close_button = ctk.CTkButton(
-        POPUP_LIVE, text="Submit", command=lambda: on_submit_click()
+        POPUP_LIVE, text=_("Submit"), command=lambda: on_submit_click()
     )
-    close_button.place(relx=0.6, rely=0.92, relwidth=0.2, relheight=0.05)
+    close_button.place(relx=0.7, rely=0.92, relwidth=0.2, relheight=0.05)
+
+
+def clear_source_target_images(map: list):
+    global source_label_dict_live, target_label_dict_live
+
+    for item in map:
+        if "source" in item:
+            del item["source"]
+        if "target" in item:
+            del item["target"]
+
+    for button_num in list(source_label_dict_live.keys()):
+        source_label_dict_live[button_num].destroy()
+        del source_label_dict_live[button_num]
+
+    for button_num in list(target_label_dict_live.keys()):
+        target_label_dict_live[button_num].destroy()
+        del target_label_dict_live[button_num]


 def refresh_data(map: list):
@@ -913,7 +1041,7 @@ def refresh_data(map: list):
         button = ctk.CTkButton(
             scrollable_frame,
-            text="Select source image",
+            text=_("Select source image"),
             command=lambda id=id: on_sbutton_click(map, id),
             width=DEFAULT_BUTTON_WIDTH,
             height=DEFAULT_BUTTON_HEIGHT,
@@ -930,7 +1058,7 @@ def refresh_data(map: list):
         button = ctk.CTkButton(
             scrollable_frame,
-            text="Select target image",
+            text=_("Select target image"),
             command=lambda id=id: on_tbutton_click(map, id),
             width=DEFAULT_BUTTON_WIDTH,
             height=DEFAULT_BUTTON_HEIGHT,
@@ -975,12 +1103,12 @@ def refresh_data(map: list):

 def update_webcam_source(
     scrollable_frame: ctk.CTkScrollableFrame, map: list, button_num: int
 ) -> list:
     global source_label_dict_live

     source_path = ctk.filedialog.askopenfilename(
-        title="select an source image",
+        title=_("select an source image"),
         initialdir=RECENT_DIRECTORY_SOURCE,
         filetypes=[img_ft],
     )
@@ -1000,7 +1128,7 @@ def update_webcam_source(
     x_min, y_min, x_max, y_max = face["bbox"]

     map[button_num]["source"] = {
-        "cv2": cv2_img[int(y_min) : int(y_max), int(x_min) : int(x_max)],
+        "cv2": cv2_img[int(y_min): int(y_max), int(x_min): int(x_max)],
         "face": face,
     }
@@ -1027,12 +1155,12 @@ def update_webcam_source(

 def update_webcam_target(
     scrollable_frame: ctk.CTkScrollableFrame, map: list, button_num: int
 ) -> list:
     global target_label_dict_live

     target_path = ctk.filedialog.askopenfilename(
-        title="select an target image",
+        title=_("select an target image"),
         initialdir=RECENT_DIRECTORY_SOURCE,
         filetypes=[img_ft],
     )
@@ -1052,7 +1180,7 @@ def update_webcam_target(
     x_min, y_min, x_max, y_max = face["bbox"]

     map[button_num]["target"] = {
-        "cv2": cv2_img[int(y_min) : int(y_max), int(x_min) : int(x_max)],
+        "cv2": cv2_img[int(y_min): int(y_max), int(x_min): int(x_max)],
         "face": face,
     }

@@ -12,16 +12,23 @@ from tqdm import tqdm

 import modules.globals

-TEMP_FILE = 'temp.mp4'
-TEMP_DIRECTORY = 'temp'
+TEMP_FILE = "temp.mp4"
+TEMP_DIRECTORY = "temp"

 # monkey patch ssl for mac
-if platform.system().lower() == 'darwin':
+if platform.system().lower() == "darwin":
     ssl._create_default_https_context = ssl._create_unverified_context


 def run_ffmpeg(args: List[str]) -> bool:
-    commands = ['ffmpeg', '-hide_banner', '-hwaccel', 'auto', '-loglevel', modules.globals.log_level]
+    commands = [
+        "ffmpeg",
+        "-hide_banner",
+        "-hwaccel",
+        "auto",
+        "-loglevel",
+        modules.globals.log_level,
+    ]
     commands.extend(args)
     try:
         subprocess.check_output(commands, stderr=subprocess.STDOUT)
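The shared ffmpeg prefix that `run_ffmpeg` assembles can be sketched as a pure function; `build_ffmpeg_command` is a hypothetical name, and the default log level here is an assumption standing in for `modules.globals.log_level`:

```python
def build_ffmpeg_command(args, log_level="error"):
    # Same shared prefix run_ffmpeg uses: hidden banner, automatic
    # hardware acceleration, configurable log level, then per-task args.
    return [
        "ffmpeg",
        "-hide_banner",
        "-hwaccel",
        "auto",
        "-loglevel",
        log_level,
    ] + list(args)
```

The returned list is what would be handed to `subprocess.check_output`.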
@@ -32,8 +39,19 @@ def run_ffmpeg(args: List[str]) -> bool:

 def detect_fps(target_path: str) -> float:
-    command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path]
-    output = subprocess.check_output(command).decode().strip().split('/')
+    command = [
+        "ffprobe",
+        "-v",
+        "error",
+        "-select_streams",
+        "v:0",
+        "-show_entries",
+        "stream=r_frame_rate",
+        "-of",
+        "default=noprint_wrappers=1:nokey=1",
+        target_path,
+    ]
+    output = subprocess.check_output(command).decode().strip().split("/")
     try:
         numerator, denominator = map(int, output)
         return numerator / denominator
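The fraction parsing that `detect_fps` does by hand can also be sketched with the standard library; `parse_r_frame_rate` is a hypothetical helper, and the 30.0 fallback mirrors the function's error path as an assumption:

```python
from fractions import Fraction

def parse_r_frame_rate(text: str, default: float = 30.0) -> float:
    # ffprobe reports r_frame_rate as a fraction such as "30000/1001";
    # fall back to a default on malformed output or a zero denominator.
    try:
        return float(Fraction(text.strip()))
    except (ValueError, ZeroDivisionError):
        return default
```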
@@ -44,25 +62,65 @@ def detect_fps(target_path: str) -> float:

 def extract_frames(target_path: str) -> None:
     temp_directory_path = get_temp_directory_path(target_path)
-    run_ffmpeg(['-i', target_path, '-pix_fmt', 'rgb24', os.path.join(temp_directory_path, '%04d.png')])
+    run_ffmpeg(
+        [
+            "-i",
+            target_path,
+            "-pix_fmt",
+            "rgb24",
+            os.path.join(temp_directory_path, "%04d.png"),
+        ]
+    )


 def create_video(target_path: str, fps: float = 30.0) -> None:
     temp_output_path = get_temp_output_path(target_path)
     temp_directory_path = get_temp_directory_path(target_path)
-    run_ffmpeg(['-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.png'), '-c:v', modules.globals.video_encoder, '-crf', str(modules.globals.video_quality), '-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path])
+    run_ffmpeg(
+        [
+            "-r",
+            str(fps),
+            "-i",
+            os.path.join(temp_directory_path, "%04d.png"),
+            "-c:v",
+            modules.globals.video_encoder,
+            "-crf",
+            str(modules.globals.video_quality),
+            "-pix_fmt",
+            "yuv420p",
+            "-vf",
+            "colorspace=bt709:iall=bt601-6-625:fast=1",
+            "-y",
+            temp_output_path,
+        ]
+    )


 def restore_audio(target_path: str, output_path: str) -> None:
     temp_output_path = get_temp_output_path(target_path)
-    done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path])
+    done = run_ffmpeg(
+        [
+            "-i",
+            temp_output_path,
+            "-i",
+            target_path,
+            "-c:v",
+            "copy",
+            "-map",
+            "0:v:0",
+            "-map",
+            "1:a:0",
+            "-y",
+            output_path,
+        ]
+    )
     if not done:
         move_temp(target_path, output_path)


 def get_temp_frame_paths(target_path: str) -> List[str]:
     temp_directory_path = get_temp_directory_path(target_path)
-    return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.png')))
+    return glob.glob((os.path.join(glob.escape(temp_directory_path), "*.png")))


 def get_temp_directory_path(target_path: str) -> str:
@@ -81,7 +139,9 @@ def normalize_output_path(source_path: str, target_path: str, output_path: str)
     source_name, _ = os.path.splitext(os.path.basename(source_path))
     target_name, target_extension = os.path.splitext(os.path.basename(target_path))
     if os.path.isdir(output_path):
-        return os.path.join(output_path, source_name + '-' + target_name + target_extension)
+        return os.path.join(
+            output_path, source_name + "-" + target_name + target_extension
+        )
     return output_path
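The directory branch of `normalize_output_path` builds a `<source>-<target><ext>` filename; a minimal standalone sketch of just that step (the helper name `join_output` is hypothetical):

```python
import os

def join_output(output_dir: str, source_path: str, target_path: str) -> str:
    # Combine the source stem, target stem, and target extension into
    # "<source>-<target><ext>" inside output_dir, mirroring the
    # os.path.isdir branch of normalize_output_path.
    source_name, _ = os.path.splitext(os.path.basename(source_path))
    target_name, ext = os.path.splitext(os.path.basename(target_path))
    return os.path.join(output_dir, source_name + "-" + target_name + ext)
```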
@@ -108,20 +168,20 @@ def clean_temp(target_path: str) -> None:

 def has_image_extension(image_path: str) -> bool:
-    return image_path.lower().endswith(('png', 'jpg', 'jpeg'))
+    return image_path.lower().endswith(("png", "jpg", "jpeg"))


 def is_image(image_path: str) -> bool:
     if image_path and os.path.isfile(image_path):
         mimetype, _ = mimetypes.guess_type(image_path)
-        return bool(mimetype and mimetype.startswith('image/'))
+        return bool(mimetype and mimetype.startswith("image/"))
     return False


 def is_video(video_path: str) -> bool:
     if video_path and os.path.isfile(video_path):
         mimetype, _ = mimetypes.guess_type(video_path)
-        return bool(mimetype and mimetype.startswith('video/'))
+        return bool(mimetype and mimetype.startswith("video/"))
     return False

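The MIME-type test shared by `is_image` and `is_video` can be sketched on its own; `media_kind` is a hypothetical helper that drops the `os.path.isfile` check so it works on bare names:

```python
import mimetypes

def media_kind(path: str):
    # Classify a path by guessed MIME type, the same prefix test
    # is_image/is_video apply: "image", "video", or None.
    mimetype, _ = mimetypes.guess_type(path)
    if mimetype and mimetype.startswith("image/"):
        return "image"
    if mimetype and mimetype.startswith("video/"):
        return "video"
    return None
```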
@@ -129,12 +189,20 @@ def conditional_download(download_directory_path: str, urls: List[str]) -> None:
     if not os.path.exists(download_directory_path):
         os.makedirs(download_directory_path)
     for url in urls:
-        download_file_path = os.path.join(download_directory_path, os.path.basename(url))
+        download_file_path = os.path.join(
+            download_directory_path, os.path.basename(url)
+        )
         if not os.path.exists(download_file_path):
             request = urllib.request.urlopen(url)  # type: ignore[attr-defined]
-            total = int(request.headers.get('Content-Length', 0))
-            with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
-                urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size))  # type: ignore[attr-defined]
+            total = int(request.headers.get("Content-Length", 0))
+            with tqdm(
+                total=total,
+                desc="Downloading",
+                unit="B",
+                unit_scale=True,
+                unit_divisor=1024,
+            ) as progress:
+                urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size))  # type: ignore[attr-defined]


 def resolve_relative_path(path: str) -> str:
@@ -0,0 +1,94 @@
+import cv2
+import numpy as np
+from typing import Optional, Tuple, Callable
+import platform
+import threading
+
+# Only import Windows-specific library if on Windows
+if platform.system() == "Windows":
+    from pygrabber.dshow_graph import FilterGraph
+
+
+class VideoCapturer:
+    def __init__(self, device_index: int):
+        self.device_index = device_index
+        self.frame_callback = None
+        self._current_frame = None
+        self._frame_ready = threading.Event()
+        self.is_running = False
+        self.cap = None
+
+        # Initialize Windows-specific components if on Windows
+        if platform.system() == "Windows":
+            self.graph = FilterGraph()
+            # Verify device exists
+            devices = self.graph.get_input_devices()
+            if self.device_index >= len(devices):
+                raise ValueError(
+                    f"Invalid device index {device_index}. Available devices: {len(devices)}"
+                )
+
+    def start(self, width: int = 960, height: int = 540, fps: int = 60) -> bool:
+        """Initialize and start video capture"""
+        try:
+            if platform.system() == "Windows":
+                # Windows-specific capture methods
+                capture_methods = [
+                    (self.device_index, cv2.CAP_DSHOW),  # Try DirectShow first
+                    (self.device_index, cv2.CAP_ANY),  # Then try default backend
+                    (-1, cv2.CAP_ANY),  # Try -1 as fallback
+                    (0, cv2.CAP_ANY),  # Finally try 0 without specific backend
+                ]
+
+                for dev_id, backend in capture_methods:
+                    try:
+                        self.cap = cv2.VideoCapture(dev_id, backend)
+                        if self.cap.isOpened():
+                            break
+                        self.cap.release()
+                    except Exception:
+                        continue
+            else:
+                # Unix-like systems (Linux/Mac) capture method
+                self.cap = cv2.VideoCapture(self.device_index)
+
+            if not self.cap or not self.cap.isOpened():
+                raise RuntimeError("Failed to open camera")
+
+            # Configure format
+            self.cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
+            self.cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
+            self.cap.set(cv2.CAP_PROP_FPS, fps)
+
+            self.is_running = True
+            return True
+
+        except Exception as e:
+            print(f"Failed to start capture: {str(e)}")
+            if self.cap:
+                self.cap.release()
+            return False
+
+    def read(self) -> Tuple[bool, Optional[np.ndarray]]:
+        """Read a frame from the camera"""
+        if not self.is_running or self.cap is None:
+            return False, None
+
+        ret, frame = self.cap.read()
+        if ret:
+            self._current_frame = frame
+            if self.frame_callback:
+                self.frame_callback(frame)
+            return True, frame
+        return False, None
+
+    def release(self) -> None:
+        """Stop capture and release resources"""
+        if self.is_running and self.cap is not None:
+            self.cap.release()
+            self.is_running = False
+            self.cap = None
+
+    def set_frame_callback(self, callback: Callable[[np.ndarray], None]) -> None:
+        """Set callback for frame processing"""
+        self.frame_callback = callback
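The read/callback contract of the new `VideoCapturer` can be exercised without a camera by substituting an in-memory source; `FrameSource` is a hypothetical stand-in written only to illustrate the `(ok, frame)` shape and the optional per-frame callback:

```python
class FrameSource:
    # In-memory stand-in for VideoCapturer's read/callback contract,
    # fed by a list of frames instead of a real camera.
    def __init__(self, frames):
        self._frames = list(frames)
        self.frame_callback = None

    def set_frame_callback(self, callback):
        self.frame_callback = callback

    def read(self):
        if not self._frames:
            # Mirrors VideoCapturer's (False, None) end-of-stream shape
            return False, None
        frame = self._frames.pop(0)
        if self.frame_callback:
            self.frame_callback(frame)
        return True, frame


# Drive it the same way create_webcam_preview drives the real capturer
seen = []
src = FrameSource(["f0", "f1"])
src.set_frame_callback(seen.append)
while True:
    ok, frame = src.read()
    if not ok:
        break
```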
(binary image asset changed — before size: 13 KiB)
(binary image asset changed — before size: 31 KiB)
@@ -1,23 +1,21 @@
 --extra-index-url https://download.pytorch.org/whl/cu118

 numpy>=1.23.5,<2
-opencv-python==4.8.1.78
+typing-extensions>=4.8.0
+opencv-python==4.10.0.84
 cv2_enumerate_cameras==1.1.15
 onnx==1.16.0
 insightface==0.7.3
 psutil==5.9.8
 tk==0.1.0
 customtkinter==5.2.2
-pillow==9.5.0
-torch==2.0.1+cu118; sys_platform != 'darwin'
-torch==2.0.1; sys_platform == 'darwin'
-torchvision==0.15.2+cu118; sys_platform != 'darwin'
-torchvision==0.15.2; sys_platform == 'darwin'
+pillow==11.1.0
+torch==2.5.1+cu118; sys_platform != 'darwin'
+torch==2.5.1; sys_platform == 'darwin'
+torchvision==0.20.1; sys_platform != 'darwin'
+torchvision==0.20.1; sys_platform == 'darwin'
 onnxruntime-silicon==1.16.3; sys_platform == 'darwin' and platform_machine == 'arm64'
-onnxruntime-gpu==1.16.3; sys_platform != 'darwin'
-tensorflow==2.12.1; sys_platform != 'darwin'
+onnxruntime-gpu==1.17; sys_platform != 'darwin'
+tensorflow; sys_platform != 'darwin'
 opennsfw2==0.10.2
 protobuf==4.23.2
 tqdm==4.66.4
 gfpgan==1.3.8
+tkinterdnd2==0.4.2
@@ -1 +1 @@
-python run.py --execution-provider cuda --execution-threads 60 --max-memory 60
+python run.py --execution-provider cuda
@@ -0,0 +1 @@
+python run.py --execution-provider dml
@@ -1 +0,0 @@
-python run.py --execution-provider dml
@@ -1,13 +0,0 @@
-@echo off
-:: Installing Microsoft Visual C++ Runtime - all versions 1.0.1 if it's not already installed
-choco install vcredist-all
-:: Installing CUDA if it's not already installed
-choco install cuda
-:: Installing ffmpeg if it's not already installed
-choco install ffmpeg
-:: Installing Python if it's not already installed
-choco install python -y
-:: Assuming successful installation, we ensure pip is upgraded
-python -m ensurepip --upgrade
-:: Use pip to install the packages listed in 'requirements.txt'
-pip install -r requirements.txt
@@ -1,122 +0,0 @@
-@echo off
-setlocal EnableDelayedExpansion
-
-:: 1. Setup your platform
-echo Setting up your platform...
-
-:: Python
-where python >nul 2>&1
-if %ERRORLEVEL% neq 0 (
-    echo Python is not installed. Please install Python 3.10 or later.
-    pause
-    exit /b
-)
-
-:: Pip
-where pip >nul 2>&1
-if %ERRORLEVEL% neq 0 (
-    echo Pip is not installed. Please install Pip.
-    pause
-    exit /b
-)
-
-:: Git
-where git >nul 2>&1
-if %ERRORLEVEL% neq 0 (
-    echo Git is not installed. Installing Git...
-    winget install --id Git.Git -e --source winget
-)
-
-:: FFMPEG
-where ffmpeg >nul 2>&1
-if %ERRORLEVEL% neq 0 (
-    echo FFMPEG is not installed. Installing FFMPEG...
-    winget install --id Gyan.FFmpeg -e --source winget
-)
-
-:: Visual Studio 2022 Runtimes
-echo Installing Visual Studio 2022 Runtimes...
-winget install --id Microsoft.VC++2015-2022Redist-x64 -e --source winget
-
-:: 2. Clone Repository
-if exist Deep-Live-Cam (
-    echo Deep-Live-Cam directory already exists.
-    set /p overwrite="Do you want to overwrite? (Y/N): "
-    if /i "%overwrite%"=="Y" (
-        rmdir /s /q Deep-Live-Cam
-        git clone https://github.com/hacksider/Deep-Live-Cam.git
-    ) else (
-        echo Skipping clone, using existing directory.
-    )
-) else (
-    git clone https://github.com/hacksider/Deep-Live-Cam.git
-)
-cd Deep-Live-Cam
-
-:: 3. Download Models
-echo Downloading models...
-mkdir models
-curl -L -o models/GFPGANv1.4.pth https://path.to.model/GFPGANv1.4.pth
-curl -L -o models/inswapper_128_fp16.onnx https://path.to.model/inswapper_128_fp16.onnx
-
-:: 4. Install dependencies
-echo Creating a virtual environment...
-python -m venv venv
-call venv\Scripts\activate
-
-echo Installing required Python packages...
-pip install --upgrade pip
-pip install -r requirements.txt
-
-echo Setup complete. You can now run the application.
-
-:: GPU Acceleration Options
-echo.
-echo Choose the GPU Acceleration Option if applicable:
-echo 1. CUDA (Nvidia)
-echo 2. CoreML (Apple Silicon)
-echo 3. CoreML (Apple Legacy)
-echo 4. DirectML (Windows)
-echo 5. OpenVINO (Intel)
-echo 6. None
-set /p choice="Enter your choice (1-6): "
-
-if "%choice%"=="1" (
-    echo Installing CUDA dependencies...
-    pip uninstall -y onnxruntime onnxruntime-gpu
-    pip install onnxruntime-gpu==1.16.3
-    set exec_provider="cuda"
-) else if "%choice%"=="2" (
-    echo Installing CoreML (Apple Silicon) dependencies...
-    pip uninstall -y onnxruntime onnxruntime-silicon
-    pip install onnxruntime-silicon==1.13.1
-    set exec_provider="coreml"
-) else if "%choice%"=="3" (
-    echo Installing CoreML (Apple Legacy) dependencies...
-    pip uninstall -y onnxruntime onnxruntime-coreml
-    pip install onnxruntime-coreml==1.13.1
-    set exec_provider="coreml"
-) else if "%choice%"=="4" (
-    echo Installing DirectML dependencies...
-    pip uninstall -y onnxruntime onnxruntime-directml
-    pip install onnxruntime-directml==1.15.1
-    set exec_provider="directml"
-) else if "%choice%"=="5" (
-    echo Installing OpenVINO dependencies...
-    pip uninstall -y onnxruntime onnxruntime-openvino
-    pip install onnxruntime-openvino==1.15.0
-    set exec_provider="openvino"
-) else (
-    echo Skipping GPU acceleration setup.
-)
-
-:: Run the application
-if defined exec_provider (
-    echo Running the application with %exec_provider% execution provider...
-    python run.py --execution-provider %exec_provider%
-) else (
-    echo Running the application...
-    python run.py
-)
-
-pause