Compare commits
No commits in common. "main" and "1.8" have entirely different histories.
README.md
@@ -30,13 +30,18 @@ By using this software, you agree to these terms and commit to using it in a man

Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.

## Exclusive v2.0 Quick Start - Pre-built (Windows)

<a href="https://deeplivecam.net/index.php/quickstart"> <img src="media/Download.png" width="285" height="77" /> </a>
## Quick Start - Pre-built (Windows / Nvidia)

##### This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU.
<a href="https://hacksider.gumroad.com/l/vccdmm"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" /> </a>

##### This is the fastest build you can get if you have a discrete NVIDIA GPU.

## Quick Start - Pre-built (Mac / Silicon)

<a href="https://krshh.gumroad.com/l/Deep-Live-Cam-Mac"> <img src="https://github.com/user-attachments/assets/d5d913b5-a7de-4609-96b9-979a5749a703" width="285" height="77" /> </a>

###### These Pre-builts are perfect for non-technical users or those who don't have time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually. The pre-built version is 60 days ahead of the open-source version.
###### These Pre-builts are perfect for non-technical users or those who don’t have time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.

## TLDR; Live Deepfake in just 3 Clicks
@@ -118,8 +123,7 @@ This is more likely to work on your computer but will be slower as it utilizes t
**2. Clone the Repository**

```bash
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
https://github.com/hacksider/Deep-Live-Cam.git
```

**3. Download the Models**
@@ -133,44 +137,14 @@ Place these files in the "**models**" folder.

We highly recommend using a `venv` to avoid issues.

For Windows:
```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```

**For macOS:**

Apple Silicon (M1/M2/M3) requires specific setup:
**For macOS:** Install or upgrade the `python-tk` package:

```bash
# Install Python 3.10 (the specific version is important)
brew install python@3.10

# Install the tkinter package (required for the GUI)
brew install python-tk@3.10

# Create and activate a virtual environment with Python 3.10
python3.10 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

**In case something goes wrong and you need to reinstall the virtual environment**

```bash
# Remove the existing virtual environment
rm -rf venv

# Recreate and activate the virtual environment
python -m venv venv
source venv/bin/activate

# Install the dependencies again
pip install -r requirements.txt
```

**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that the initial execution will download models (~300MB).
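For reference, a minimal CPU-only invocation could look like the following (a sketch assuming the virtual environment above is active; the explicit `cpu` provider value is an assumption mirroring the `--execution-provider` flags shown further down, and plain `python run.py` behaves the same):

```bash
# Run without a GPU; the first launch downloads the models (~300MB).
python run.py --execution-provider cpu
```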
@@ -195,39 +169,19 @@ python run.py --execution-provider cuda

**CoreML Execution Provider (Apple Silicon)**

Apple Silicon (M1/M2/M3) specific installation:

1. Make sure you've completed the macOS setup above using Python 3.10.
2. Install dependencies:
1. Install dependencies:

```bash
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
```

3. Usage (important: specify Python 3.10):
2. Usage:

```bash
python3.10 run.py --execution-provider coreml
python run.py --execution-provider coreml
```

**Important Notes for macOS:**
- You **must** use Python 3.10, not newer versions like 3.11 or 3.13.
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed.
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder.
- If you encounter conflicts with other Python versions, consider uninstalling them:
```bash
# List all installed Python versions
brew list | grep python

# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies python@3.11 python@3.13

# Keep only Python 3.10
brew cleanup
```

**CoreML Execution Provider (Apple Legacy)**

1. Install dependencies:
@@ -272,6 +226,7 @@ pip install onnxruntime-openvino==1.15.0
```bash
python run.py --execution-provider openvino
```

</details>

## Usage
@@ -292,19 +247,6 @@ python run.py --execution-provider openvino
- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.

## Tips and Tricks

Check out these helpful guides to get the most out of Deep-Live-Cam:

- [Unlocking the Secrets to the Perfect Deepfake Image](https://deeplivecam.net/index.php/blog/tips-and-tricks/unlocking-the-secrets-to-the-perfect-deepfake-image) - Learn how to create the best deepfake with full head coverage
- [Video Call with DeepLiveCam](https://deeplivecam.net/index.php/blog/tips-and-tricks/video-call-with-deeplivecam) - Make your meetings livelier by using DeepLiveCam with OBS and meeting software
- [Have a Special Guest!](https://deeplivecam.net/index.php/blog/tips-and-tricks/have-a-special-guest) - Tutorial on how to use face mapping to add special guests to your stream
- [Watch Deepfake Movies in Realtime](https://deeplivecam.net/index.php/blog/tips-and-tricks/watch-deepfake-movies-in-realtime) - See yourself star in any video without processing the video
- [Better Quality without Sacrificing Speed](https://deeplivecam.net/index.php/blog/tips-and-tricks/better-quality-without-sacrificing-speed) - Tips for achieving better results without impacting performance
- [Instant Vtuber!](https://deeplivecam.net/index.php/blog/tips-and-tricks/instant-vtuber) - Create a new persona/vtuber easily using Metahuman Creator

Visit our [official blog](https://deeplivecam.net/index.php/blog/tips-and-tricks) for more tips and tutorials.

## Command Line Arguments (Unmaintained)

```
@@ -378,3 +320,5 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
</picture>
</a>
Binary image file changed (not shown): 8.7 KiB before, 9.0 KiB after.
modules/processors/frame/core.py

@@ -42,29 +42,18 @@ def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType

def set_frame_processors_modules_from_ui(frame_processors: List[str]) -> None:
    global FRAME_PROCESSORS_MODULES
    current_processor_names = [proc.__name__.split('.')[-1] for proc in FRAME_PROCESSORS_MODULES]

    for frame_processor, state in modules.globals.fp_ui.items():
        if state == True and frame_processor not in current_processor_names:
        if state == True and frame_processor not in frame_processors:
            frame_processor_module = load_frame_processor_module(frame_processor)
            FRAME_PROCESSORS_MODULES.append(frame_processor_module)
            modules.globals.frame_processors.append(frame_processor)
        if state == False:
            try:
                frame_processor_module = load_frame_processor_module(frame_processor)
                FRAME_PROCESSORS_MODULES.append(frame_processor_module)
                if frame_processor not in modules.globals.frame_processors:
                    modules.globals.frame_processors.append(frame_processor)
            except SystemExit:
                print(f"Warning: Failed to load frame processor {frame_processor} requested by UI state.")
            except Exception as e:
                print(f"Warning: Error loading frame processor {frame_processor} requested by UI state: {e}")

        elif state == False and frame_processor in current_processor_names:
            try:
                module_to_remove = next((mod for mod in FRAME_PROCESSORS_MODULES if mod.__name__.endswith(f'.{frame_processor}')), None)
                if module_to_remove:
                    FRAME_PROCESSORS_MODULES.remove(module_to_remove)
                if frame_processor in modules.globals.frame_processors:
                    modules.globals.frame_processors.remove(frame_processor)
            except Exception as e:
                print(f"Warning: Error removing frame processor {frame_processor}: {e}")
                FRAME_PROCESSORS_MODULES.remove(frame_processor_module)
                modules.globals.frame_processors.remove(frame_processor)
            except:
                pass

def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], progress: Any = None) -> None:
    with ThreadPoolExecutor(max_workers=modules.globals.execution_threads) as executor:
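For orientation only: the function above is driven by the `modules.globals.fp_ui` name-to-state mapping. A hypothetical toggle sequence might look like the sketch below (the `face_enhancer` key and the exact call site are assumptions inferred from module names elsewhere in this diff, not code from either branch):

```python
import modules.globals
from modules.processors.frame.core import set_frame_processors_modules_from_ui

# Simulate the UI enabling the enhancer, then sync the loaded module list.
modules.globals.fp_ui["face_enhancer"] = True  # hypothetical key
set_frame_processors_modules_from_ui(modules.globals.frame_processors)

# Toggling it off removes the module (and its name) again on the "main" side.
modules.globals.fp_ui["face_enhancer"] = False
set_frame_processors_modules_from_ui(modules.globals.frame_processors)
```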
modules/processors/frame/face_enhancer.py

@@ -48,17 +48,6 @@ def pre_start() -> bool:
    return True


TENSORRT_AVAILABLE = False
try:
    import torch_tensorrt
    TENSORRT_AVAILABLE = True
except ImportError as im:
    print(f"TensorRT is not available: {im}")
    pass
except Exception as e:
    print(f"TensorRT is not available: {e}")
    pass

def get_face_enhancer() -> Any:
    global FACE_ENHANCER

@@ -66,26 +55,16 @@ def get_face_enhancer() -> Any:
    if FACE_ENHANCER is None:
        model_path = os.path.join(models_dir, "GFPGANv1.4.pth")

        selected_device = None
        device_priority = []
        match platform.system():
            case "Darwin":  # Mac OS
                if torch.backends.mps.is_available():
                    mps_device = torch.device("mps")
                    FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=mps_device)  # type: ignore[attr-defined]
                else:
                    FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]
            case _:  # Other OS
                FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1)  # type: ignore[attr-defined]

        if TENSORRT_AVAILABLE and torch.cuda.is_available():
            selected_device = torch.device("cuda")
            device_priority.append("TensorRT+CUDA")
        elif torch.cuda.is_available():
            selected_device = torch.device("cuda")
            device_priority.append("CUDA")
        elif torch.backends.mps.is_available() and platform.system() == "Darwin":
            selected_device = torch.device("mps")
            device_priority.append("MPS")
        elif not torch.cuda.is_available():
            selected_device = torch.device("cpu")
            device_priority.append("CPU")

        FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=selected_device)

        # for debug:
        print(f"Selected device: {selected_device} and device priority: {device_priority}")
    return FACE_ENHANCER
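As a reading aid, here is a condensed, self-contained sketch of the device-priority chain used on the "main" side above (TensorRT+CUDA, then CUDA, then MPS on macOS, then CPU). It is an illustration of that logic, not code from either branch:

```python
import platform

import torch

# Optional TensorRT probe, mirroring the TENSORRT_AVAILABLE flag above.
try:
    import torch_tensorrt  # noqa: F401
    TENSORRT_AVAILABLE = True
except Exception as exc:
    TENSORRT_AVAILABLE = False
    print(f"TensorRT is not available: {exc}")


def pick_enhancer_device() -> tuple[torch.device, str]:
    """Return the device and a label, following the priority used above."""
    if TENSORRT_AVAILABLE and torch.cuda.is_available():
        return torch.device("cuda"), "TensorRT+CUDA"
    if torch.cuda.is_available():
        return torch.device("cuda"), "CUDA"
    if platform.system() == "Darwin" and torch.backends.mps.is_available():
        return torch.device("mps"), "MPS"
    return torch.device("cpu"), "CPU"


device, label = pick_enhancer_device()
print(f"Selected device: {device} ({label})")
```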
modules/processors/frame/face_swapper.py

@@ -4,7 +4,6 @@ import insightface
import threading
import numpy as np
import modules.globals
import logging
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face

@@ -106,20 +105,14 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
    many_faces = get_many_faces(temp_frame)
    if many_faces:
        for target_face in many_faces:
            if source_face and target_face:
                temp_frame = swap_face(source_face, target_face, temp_frame)
            else:
                print("Face detection failed for target/source.")
            temp_frame = swap_face(source_face, target_face, temp_frame)
    else:
        target_face = get_one_face(temp_frame)
        if target_face and source_face:
        if target_face:
            temp_frame = swap_face(source_face, target_face, temp_frame)
        else:
            logging.error("Face detection failed for target or source.")
    return temp_frame


def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
    if is_image(modules.globals.target_path):
        if modules.globals.many_faces:
requirements.txt

@@ -15,7 +15,11 @@ torch==2.5.1; sys_platform == 'darwin'
torchvision==0.20.1; sys_platform != 'darwin'
torchvision==0.20.1; sys_platform == 'darwin'
onnxruntime-silicon==1.16.3; sys_platform == 'darwin' and platform_machine == 'arm64'
onnxruntime-gpu==1.17; sys_platform != 'darwin'
onnxruntime-gpu==1.16.3; sys_platform != 'darwin'
tensorflow; sys_platform != 'darwin'
opennsfw2==0.10.2
protobuf==4.23.2
tqdm==4.66.4
gfpgan==1.3.8
tkinterdnd2==0.4.2
pygrabber==0.2