Compare commits
43 commits: d5a3fb0c47, 9690070399, f3e83b985c, e3e3638b79, 4a7874a968, 75122da389, 7063bba4b3, bdbd7dcfbc, a64940def7, fe4a87e8f2, 9ecd2dab83, c9f36eb350, b1f610d432, d86c36dc47, 532e7c05ee, 267a273cb2, 938aa9eaf1, 37bac27302, 84836932e6, e879d2ca64, 181144ce33, 890beb0eae, 75b5b096d6, 40e47a469c, 874abb4e59, 18b259da70, 01900dcfb5, 07e30fe781, 3dda4f2179, 71735e4f60, 90d5c28542, 104d8cf4d6, ac3696b69d, 76fb209e6c, 2dcd552c4b, 66248a37b4, aa9b7ed3b6, 51a4246050, 3f1c072fac, f91f9203e7, 80477676b4, c728994e6b, 65da3be2a4

README.md
@@ -30,18 +30,13 @@ By using this software, you agree to these terms and commit to using it in a man

Users are expected to use this software responsibly and legally. If using a real person's face, obtain their consent and clearly label any output as a deepfake when sharing online. We are not responsible for end-user actions.

## Exclusive v2.0 Quick Start - Pre-built (Windows)

## Quick Start - Pre-built (Windows / Nvidia)

<a href="https://deeplivecam.net/index.php/quickstart"> <img src="media/Download.png" width="285" height="77" />

<a href="https://hacksider.gumroad.com/l/vccdmm"> <img src="https://github.com/user-attachments/assets/7d993b32-e3e8-4cd3-bbfb-a549152ebdd5" width="285" height="77" />

##### This is the fastest build you can get if you have a discrete NVIDIA GPU.

## Quick Start - Pre-built (Mac / Silicon)

<a href="https://krshh.gumroad.com/l/Deep-Live-Cam-Mac"> <img src="https://github.com/user-attachments/assets/d5d913b5-a7de-4609-96b9-979a5749a703" width="285" height="77" />

##### This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU.

###### These Pre-builts are perfect for non-technical users or those who don’t have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.

###### These Pre-builts are perfect for non-technical users or those who don't have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually. This will be 60 days ahead of the open source version.

## TLDR; Live Deepfake in just 3 Clicks

![easysteps](https://github.com/user-attachments/assets/af825228-852c-411b-b787-ffd9aac72fc6)

@@ -123,7 +118,8 @@ This is more likely to work on your computer but will be slower as it utilizes t

**2. Clone the Repository**

```bash
https://github.com/hacksider/Deep-Live-Cam.git
git clone https://github.com/hacksider/Deep-Live-Cam.git
cd Deep-Live-Cam
```

**3. Download the Models**

@@ -137,14 +133,52 @@ Place these files in the "**models**" folder.

We highly recommend using a `venv` to avoid issues.

For Windows:
```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```

For Linux:
```bash
# Ensure you use the installed Python 3.10
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

**For macOS:** Install or upgrade the `python-tk` package:

**For macOS:**

Apple Silicon (M1/M2/M3) requires specific setup:

```bash
# Install Python 3.10 (specific version is important)
brew install python@3.10

# Install tkinter package (required for the GUI)
brew install python-tk@3.10

# Create and activate virtual environment with Python 3.10
python3.10 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt
```

**In case something goes wrong and you need to reinstall the virtual environment**

```bash
# Remove the existing virtual environment
rm -rf venv

# Recreate the virtual environment
python -m venv venv
source venv/bin/activate

# Install the dependencies again
pip install -r requirements.txt
```

**Run:** If you don't have a GPU, you can run Deep-Live-Cam using `python run.py`. Note that initial execution will download models (~300MB).

@@ -169,19 +203,39 @@ python run.py --execution-provider cuda

**CoreML Execution Provider (Apple Silicon)**

1. Install dependencies:

Apple Silicon (M1/M2/M3) specific installation:

1. Make sure you've completed the macOS setup above using Python 3.10.
2. Install dependencies:

```bash
pip uninstall onnxruntime onnxruntime-silicon
pip install onnxruntime-silicon==1.13.1
```

2. Usage:

3. Usage (important: specify Python 3.10):

```bash
python run.py --execution-provider coreml
python3.10 run.py --execution-provider coreml
```

**Important Notes for macOS:**
- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder
- If you encounter conflicts with other Python versions, consider uninstalling them:

```bash
# List all installed Python versions
brew list | grep python

# Uninstall conflicting versions if needed
brew uninstall --ignore-dependencies python@3.11 python@3.13

# Keep only Python 3.10
brew cleanup
```
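
A small sanity-check script along these lines (illustrative only, not a file from this repository) can catch the two most common issues called out in the notes above before launching:

```python
import sys

# The notes above require Python 3.10 specifically.
if sys.version_info[:2] != (3, 10):
    print(
        f"Warning: running Python {sys.version_info.major}.{sys.version_info.minor}; "
        "the steps above expect 3.10 (run with the python3.10 command)."
    )

# A missing _tkinter is the error mentioned above; the suggested fix is
# `brew reinstall python-tk@3.10`.
try:
    import tkinter  # noqa: F401
    print("tkinter is available.")
except ImportError:
    print("tkinter is missing; see the macOS notes above.")
```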

**CoreML Execution Provider (Apple Legacy)**

1. Install dependencies:

@@ -226,7 +280,6 @@ pip install onnxruntime-openvino==1.15.0

```bash
python run.py --execution-provider openvino
```

</details>

## Usage

@@ -247,6 +300,19 @@ python run.py --execution-provider openvino

- Use a screen capture tool like OBS to stream.
- To change the face, select a new source image.

## Tips and Tricks

Check out these helpful guides to get the most out of Deep-Live-Cam:

- [Unlocking the Secrets to the Perfect Deepfake Image](https://deeplivecam.net/index.php/blog/tips-and-tricks/unlocking-the-secrets-to-the-perfect-deepfake-image) - Learn how to create the best deepfake with full head coverage
- [Video Call with DeepLiveCam](https://deeplivecam.net/index.php/blog/tips-and-tricks/video-call-with-deeplivecam) - Make your meetings livelier by using DeepLiveCam with OBS and meeting software
- [Have a Special Guest!](https://deeplivecam.net/index.php/blog/tips-and-tricks/have-a-special-guest) - Tutorial on how to use face mapping to add special guests to your stream
- [Watch Deepfake Movies in Realtime](https://deeplivecam.net/index.php/blog/tips-and-tricks/watch-deepfake-movies-in-realtime) - See yourself star in any video without processing the video
- [Better Quality without Sacrificing Speed](https://deeplivecam.net/index.php/blog/tips-and-tricks/better-quality-without-sacrificing-speed) - Tips for achieving better results without impacting performance
- [Instant Vtuber!](https://deeplivecam.net/index.php/blog/tips-and-tricks/instant-vtuber) - Create a new persona/vtuber easily using Metahuman Creator

Visit our [official blog](https://deeplivecam.net/index.php/blog/tips-and-tricks) for more tips and tutorials.

## Command Line Arguments (Unmaintained)

```

@@ -320,5 +386,3 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program

<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=hacksider/deep-live-cam&type=Date" />
</picture>
</a>

@@ -0,0 +1,46 @@

{
"Source x Target Mapper": "Quelle x Ziel Zuordnung",
"select a source image": "Wähle ein Quellbild",
"Preview": "Vorschau",
"select a target image or video": "Wähle ein Zielbild oder Video",
"save image output file": "Bildausgabedatei speichern",
"save video output file": "Videoausgabedatei speichern",
"select a target image": "Wähle ein Zielbild",
"source": "Quelle",
"Select a target": "Wähle ein Ziel",
"Select a face": "Wähle ein Gesicht",
"Keep audio": "Audio beibehalten",
"Face Enhancer": "Gesichtsverbesserung",
"Many faces": "Mehrere Gesichter",
"Show FPS": "FPS anzeigen",
"Keep fps": "FPS beibehalten",
"Keep frames": "Frames beibehalten",
"Fix Blueish Cam": "Bläuliche Kamera korrigieren",
"Mouth Mask": "Mundmaske",
"Show Mouth Mask Box": "Mundmaskenrahmen anzeigen",
"Start": "Starten",
"Live": "Live",
"Destroy": "Beenden",
"Map faces": "Gesichter zuordnen",
"Processing...": "Verarbeitung läuft...",
"Processing succeed!": "Verarbeitung erfolgreich!",
"Processing ignored!": "Verarbeitung ignoriert!",
"Failed to start camera": "Kamera konnte nicht gestartet werden",
"Please complete pop-up or close it.": "Bitte das Pop-up komplettieren oder schließen.",
"Getting unique faces": "Einzigartige Gesichter erfassen",
"Please select a source image first": "Bitte zuerst ein Quellbild auswählen",
"No faces found in target": "Keine Gesichter im Zielbild gefunden",
"Add": "Hinzufügen",
"Clear": "Löschen",
"Submit": "Absenden",
"Select source image": "Quellbild auswählen",
"Select target image": "Zielbild auswählen",
"Please provide mapping!": "Bitte eine Zuordnung angeben!",
"At least 1 source with target is required!": "Mindestens eine Quelle mit einem Ziel ist erforderlich!",
"At least 1 source with target is required!": "Mindestens eine Quelle mit einem Ziel ist erforderlich!",
"Face could not be detected in last upload!": "Im letzten Upload konnte kein Gesicht erkannt werden!",
"Select Camera:": "Kamera auswählen:",
"All mappings cleared!": "Alle Zuordnungen gelöscht!",
"Mappings successfully submitted!": "Zuordnungen erfolgreich übermittelt!",
"Source x Target Mapper is already open.": "Quell-zu-Ziel-Zuordnung ist bereits geöffnet."
}

@@ -0,0 +1,46 @@

{
"Source x Target Mapper": "Source x Target Kartoitin",
"select an source image": "Valitse lähde kuva",
"Preview": "Esikatsele",
"select an target image or video": "Valitse kohde kuva tai video",
"save image output file": "tallenna kuva",
"save video output file": "tallenna video",
"select an target image": "Valitse kohde kuva",
"source": "lähde",
"Select a target": "Valitse kohde",
"Select a face": "Valitse kasvot",
"Keep audio": "Säilytä ääni",
"Face Enhancer": "Kasvojen Parantaja",
"Many faces": "Useampia kasvoja",
"Show FPS": "Näytä FPS",
"Keep fps": "Säilytä FPS",
"Keep frames": "Säilytä ruudut",
"Fix Blueish Cam": "Korjaa Sinertävä Kamera",
"Mouth Mask": "Suu Maski",
"Show Mouth Mask Box": "Näytä Suu Maski Laatiko",
"Start": "Aloita",
"Live": "Live",
"Destroy": "Tuhoa",
"Map faces": "Kartoita kasvot",
"Processing...": "Prosessoi...",
"Processing succeed!": "Prosessointi onnistui!",
"Processing ignored!": "Prosessointi lopetettu!",
"Failed to start camera": "Kameran käynnistäminen epäonnistui",
"Please complete pop-up or close it.": "Viimeistele tai sulje ponnahdusikkuna",
"Getting unique faces": "Hankitaan uniikkeja kasvoja",
"Please select a source image first": "Valitse ensin lähde kuva",
"No faces found in target": "Kasvoja ei löydetty kohteessa",
"Add": "Lisää",
"Clear": "Tyhjennä",
"Submit": "Lähetä",
"Select source image": "Valitse lähde kuva",
"Select target image": "Valitse kohde kuva",
"Please provide mapping!": "Tarjoa kartoitus!",
"Atleast 1 source with target is required!": "Vähintään 1 lähde kohteen kanssa on vaadittu!",
"At least 1 source with target is required!": "Vähintään 1 lähde kohteen kanssa on vaadittu!",
"Face could not be detected in last upload!": "Kasvoja ei voitu tunnistaa edellisessä latauksessa!",
"Select Camera:": "Valitse Kamera:",
"All mappings cleared!": "Kaikki kartoitukset tyhjennetty!",
"Mappings successfully submitted!": "Kartoitukset lähetety onnistuneesti!",
"Source x Target Mapper is already open.": "Lähde x Kohde Kartoittaja on jo auki."
}

@@ -1,11 +1,11 @@

{
"Source x Target Mapper": "Source x Target Mapper",
"select an source image": "选择一个源图像",
"select a source image": "选择一个源图像",
"Preview": "预览",
"select an target image or video": "选择一个目标图像或视频",
"select a target image or video": "选择一个目标图像或视频",
"save image output file": "保存图像输出文件",
"save video output file": "保存视频输出文件",
"select an target image": "选择一个目标图像",
"select a target image": "选择一个目标图像",
"source": "源",
"Select a target": "选择一个目标",
"Select a face": "选择一张脸",

@@ -36,11 +36,11 @@

"Select source image": "请选取源图像",
"Select target image": "请选取目标图像",
"Please provide mapping!": "请提供映射",
"Atleast 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"At least 1 source with target is required!": "至少需要一个来源图像与目标图像相关!",
"Face could not be detected in last upload!": "最近上传的图像中没有检测到人脸!",
"Select Camera:": "选择摄像头",
"All mappings cleared!": "所有映射均已清除!",
"Mappings successfully submitted!": "成功提交映射!",
"Source x Target Mapper is already open.": "源 x 目标映射器已打开。"
}
}
Binary file not shown. (After: 8.7 KiB)
Binary file not shown. (Before: 9.0 KiB)

@@ -0,0 +1,18 @@

import os

import cv2
import numpy as np


# Utility function to support unicode characters in file paths for reading
def imread_unicode(path, flags=cv2.IMREAD_COLOR):
    return cv2.imdecode(np.fromfile(path, dtype=np.uint8), flags)


# Utility function to support unicode characters in file paths for writing
def imwrite_unicode(path, img, params=None):
    root, ext = os.path.splitext(path)
    if not ext:
        ext = ".png"
    result, encoded_img = cv2.imencode(ext, img, params if params is not None else [])
    if result:
        encoded_img.tofile(path)
        return True
    return False
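
As a quick illustration of how these helpers might be called, assuming the two functions above are in scope, a read/write round trip could look like this (the file paths are placeholders, not files from the repository):

```python
# Hypothetical usage of the unicode-safe helpers defined above.
img = imread_unicode("フォルダ/顔写真.png")
if img is None:
    print("could not read the image")
else:
    saved = imwrite_unicode("フォルダ/結果.png", img)
    print("saved" if saved else "encoding failed")
```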

@@ -42,18 +42,29 @@ def get_frame_processors_modules(frame_processors: List[str]) -> List[ModuleType

def set_frame_processors_modules_from_ui(frame_processors: List[str]) -> None:
global FRAME_PROCESSORS_MODULES
current_processor_names = [proc.__name__.split('.')[-1] for proc in FRAME_PROCESSORS_MODULES]

for frame_processor, state in modules.globals.fp_ui.items():
if state == True and frame_processor not in frame_processors:
frame_processor_module = load_frame_processor_module(frame_processor)
FRAME_PROCESSORS_MODULES.append(frame_processor_module)
modules.globals.frame_processors.append(frame_processor)
if state == False:
if state == True and frame_processor not in current_processor_names:
try:
frame_processor_module = load_frame_processor_module(frame_processor)
FRAME_PROCESSORS_MODULES.remove(frame_processor_module)
modules.globals.frame_processors.remove(frame_processor)
except:
pass
FRAME_PROCESSORS_MODULES.append(frame_processor_module)
if frame_processor not in modules.globals.frame_processors:
modules.globals.frame_processors.append(frame_processor)
except SystemExit:
print(f"Warning: Failed to load frame processor {frame_processor} requested by UI state.")
except Exception as e:
print(f"Warning: Error loading frame processor {frame_processor} requested by UI state: {e}")

elif state == False and frame_processor in current_processor_names:
try:
module_to_remove = next((mod for mod in FRAME_PROCESSORS_MODULES if mod.__name__.endswith(f'.{frame_processor}')), None)
if module_to_remove:
FRAME_PROCESSORS_MODULES.remove(module_to_remove)
if frame_processor in modules.globals.frame_processors:
modules.globals.frame_processors.remove(frame_processor)
except Exception as e:
print(f"Warning: Error removing frame processor {frame_processor}: {e}")

def multi_process_frame(source_path: str, temp_frame_paths: List[str], process_frames: Callable[[str, List[str], Any], None], progress: Any = None) -> None:
with ThreadPoolExecutor(max_workers=modules.globals.execution_threads) as executor:
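
For orientation, here is a rough sketch of how this function is meant to be driven from the UI; the `fp_ui` contents and the `face_enhancer` key are assumptions for illustration, and the import path follows the module references elsewhere in this diff:

```python
import modules.globals
from modules.processors.frame.core import set_frame_processors_modules_from_ui

# Hypothetical UI state: the user has just ticked a "Face Enhancer" style checkbox.
modules.globals.fp_ui = {"face_enhancer": True}

# Re-sync the loaded processor modules with the checkbox state: enabled processors
# are loaded and appended, disabled ones are removed from the active list.
set_frame_processors_modules_from_ui(modules.globals.frame_processors)
```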

@@ -48,6 +48,17 @@ def pre_start() -> bool:

return True

TENSORRT_AVAILABLE = False
try:
import torch_tensorrt
TENSORRT_AVAILABLE = True
except ImportError as im:
print(f"TensorRT is not available: {im}")
pass
except Exception as e:
print(f"TensorRT is not available: {e}")
pass

def get_face_enhancer() -> Any:
global FACE_ENHANCER

@@ -55,16 +66,26 @@ def get_face_enhancer() -> Any:

if FACE_ENHANCER is None:
model_path = os.path.join(models_dir, "GFPGANv1.4.pth")

match platform.system():
case "Darwin": # Mac OS
if torch.backends.mps.is_available():
mps_device = torch.device("mps")
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=mps_device) # type: ignore[attr-defined]
else:
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined]
case _: # Other OS
FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1) # type: ignore[attr-defined]
selected_device = None
device_priority = []

if TENSORRT_AVAILABLE and torch.cuda.is_available():
selected_device = torch.device("cuda")
device_priority.append("TensorRT+CUDA")
elif torch.cuda.is_available():
selected_device = torch.device("cuda")
device_priority.append("CUDA")
elif torch.backends.mps.is_available() and platform.system() == "Darwin":
selected_device = torch.device("mps")
device_priority.append("MPS")
elif not torch.cuda.is_available():
selected_device = torch.device("cpu")
device_priority.append("CPU")

FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=selected_device)

# for debug:
print(f"Selected device: {selected_device} and device priority: {device_priority}")
return FACE_ENHANCER
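
To make the selection order easier to follow, here is a condensed, standalone sketch of the same device-priority idea; the function name is illustrative and not part of the module:

```python
import platform

import torch


def pick_enhancer_device() -> torch.device:
    """Condensed sketch of the device-priority logic shown above."""
    if torch.cuda.is_available():
        # Covers both the "TensorRT+CUDA" and plain "CUDA" branches: either way, the GPU is used.
        return torch.device("cuda")
    if platform.system() == "Darwin" and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```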

@@ -4,6 +4,7 @@ import insightface

import threading
import numpy as np
import modules.globals
import logging
import modules.processors.frame.core
from modules.core import update_status
from modules.face_analyser import get_one_face, get_many_faces, default_source_face

@@ -105,14 +106,20 @@ def process_frame(source_face: Face, temp_frame: Frame) -> Frame:

many_faces = get_many_faces(temp_frame)
if many_faces:
for target_face in many_faces:
temp_frame = swap_face(source_face, target_face, temp_frame)
if source_face and target_face:
temp_frame = swap_face(source_face, target_face, temp_frame)
else:
print("Face detection failed for target/source.")
else:
target_face = get_one_face(temp_frame)
if target_face:
if target_face and source_face:
temp_frame = swap_face(source_face, target_face, temp_frame)
else:
logging.error("Face detection failed for target or source.")
return temp_frame

def process_frame_v2(temp_frame: Frame, temp_frame_path: str = "") -> Frame:
if is_image(modules.globals.target_path):
if modules.globals.many_faces:
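
The substance of this hunk is a guard that only swaps when both the source and target face were actually detected; a minimal restatement of that pattern (the helper name and fallback behavior are illustrative, not the module's own):

```python
import logging


def swap_if_detected(source_face, target_face, frame, swap):
    # Swap only when both detections succeeded; otherwise log and return the frame unchanged.
    if source_face and target_face:
        return swap(source_face, target_face, frame)
    logging.error("Face detection failed for target or source.")
    return frame
```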

@@ -429,7 +429,7 @@ def create_source_target_popup(

POPUP.destroy()
select_output_path(start)
else:
update_pop_status("Atleast 1 source with target is required!")
update_pop_status("At least 1 source with target is required!")

scrollable_frame = ctk.CTkScrollableFrame(
POPUP, width=POPUP_SCROLL_WIDTH, height=POPUP_SCROLL_HEIGHT

@@ -489,7 +489,7 @@ def update_popup_source(

global source_label_dict

source_path = ctk.filedialog.askopenfilename(
title=_("select an source image"),
title=_("select a source image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)

@@ -584,7 +584,7 @@ def select_source_path() -> None:

PREVIEW.withdraw()
source_path = ctk.filedialog.askopenfilename(
title=_("select an source image"),
title=_("select a source image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)

@@ -627,7 +627,7 @@ def select_target_path() -> None:

PREVIEW.withdraw()
target_path = ctk.filedialog.askopenfilename(
title=_("select an target image or video"),
title=_("select a target image or video"),
initialdir=RECENT_DIRECTORY_TARGET,
filetypes=[img_ft, vid_ft],
)

@@ -1108,7 +1108,7 @@ def update_webcam_source(

global source_label_dict_live

source_path = ctk.filedialog.askopenfilename(
title=_("select an source image"),
title=_("select a source image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)

@@ -1160,7 +1160,7 @@ def update_webcam_target(

global target_label_dict_live

target_path = ctk.filedialog.askopenfilename(
title=_("select an target image"),
title=_("select a target image"),
initialdir=RECENT_DIRECTORY_SOURCE,
filetypes=[img_ft],
)

@@ -15,11 +15,7 @@ torch==2.5.1; sys_platform == 'darwin'

torchvision==0.20.1; sys_platform != 'darwin'
torchvision==0.20.1; sys_platform == 'darwin'
onnxruntime-silicon==1.16.3; sys_platform == 'darwin' and platform_machine == 'arm64'
onnxruntime-gpu==1.16.3; sys_platform != 'darwin'
onnxruntime-gpu==1.17; sys_platform != 'darwin'
tensorflow; sys_platform != 'darwin'
opennsfw2==0.10.2
protobuf==4.23.2
tqdm==4.66.4
gfpgan==1.3.8
tkinterdnd2==0.4.2
pygrabber==0.2