Offline-Text-Translate: Local Offline Text Translation (OTT)
Supports local offline text translation for multiple languages, providing an API interface.
This project is a re-encapsulation of the open-source project LibreTranslate. Its purpose is to provide an easy-to-deploy translation API service that runs directly on a local machine, with no need for Docker. Pre-compiled Windows EXE packages are also provided, so no deployment is required: just double-click to use, which is convenient for novices and beginners.
The first startup requires downloading the model, and subsequent runs can be done offline.
If you want to use the original LibreTranslate project or deploy it with Docker, please visit https://github.com/LibreTranslate/LibreTranslate
Using the Windows Pre-compiled Package
- If you cannot open the address https://raw.githubusercontent.com, you must set the proxy address in set.ini as PROXY=proxy address.
- You can also download the pre-packaged model from Baidu Netdisk, extract it, and copy the ".local" folder inside it into the root directory of this software, overwriting existing files. Click here to download from Baidu Netdisk.
Click to download the Windows pre-compiled package, extract it to a directory whose path contains only English characters and no spaces, and double-click start.exe.
The model will be downloaded automatically on first startup. After the download completes, the current API service address and port will be displayed and you can start using it. (You can also download the pre-packaged model from Baidu Netdisk, extract it, and copy the ".local" folder inside it into the root directory of this software, overwriting existing files.)
You can write your own program that calls this API service to replace services such as Baidu Translate, or enter it into software that needs a translation function. For example, to use it in video translation and dubbing software, fill in the server address and port under the software's Menu - Settings - OTT (default http://127.0.0.1:9911).
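Since OTT is a re-encapsulation of LibreTranslate, a minimal Python sketch for calling the local service might look like the following. It assumes the default address http://127.0.0.1:9911 and a LibreTranslate-compatible /translate endpoint accepting q, source, target, and format fields; verify against the API of your actual build before relying on it.

```python
# Minimal sketch: send a translation request to the local OTT service.
# Assumes the default address http://127.0.0.1:9911 and a LibreTranslate-compatible
# /translate endpoint (fields q, source, target, format) -- verify against your build.
import requests

OTT_URL = "http://127.0.0.1:9911/translate"  # address and port shown after startup

def translate(text: str, source: str = "en", target: str = "zh") -> str:
    resp = requests.post(
        OTT_URL,
        json={"q": text, "source": source, "target": target, "format": "text"},
        timeout=30,
    )
    resp.raise_for_status()
    # LibreTranslate-style responses return {"translatedText": "..."}
    return resp.json()["translatedText"]

if __name__ == "__main__":
    print(translate("Hello, world!", source="en", target="zh"))
```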
Source Code Deployment on Windows
First, download Python 3.9+ (3.10 is recommended) from python.org and install it. During installation, be sure to select the "Add ... Path" checkbox to simplify the later steps.
Install the Git client for Windows: click here to download, choose the 64-bit Git for Windows Setup, double-click the installer after downloading, and click Next until the installation completes.
Create an empty directory, for example a directory named ott on the D drive, then open the directory D:/ott. Type cmd in the folder address bar and press Enter. In the cmd window that opens, enter

git clone https://github.com/jianchang512/ott .

and press Enter to execute it.

Create a virtual environment: in the same cmd window, enter the command

python -m venv venv

and press Enter.
Note: If you are prompted with "python is not an internal or external command, nor is it a runnable program", it means the "Add ... Path" checkbox was not selected when installing Python above. Double-click the downloaded Python installer again, select "Modify", and make sure to select "Add ... Path".

After reinstalling Python, you must close the cmd window that is already open, otherwise you may still be told the command is not found. Then open D:/ott again, type cmd in the address bar and press Enter, and re-execute

python -m venv venv
After the above command executes successfully, continue by entering

.\venv\scripts\activate

and pressing Enter, then execute

pip install -r requirements.txt --no-deps

If you are prompted with "not found version xxx", please switch the mirror source to the official pip index or the Alibaba Cloud mirror.

If you need to enable CUDA-accelerated translation, continue by executing
pip uninstall -y torch
pip install torch==2.1.2 --index-url https://download.pytorch.org/whl/cu121
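After these two commands finish, you can quickly confirm that the CUDA build of torch is in effect. This is a minimal check using standard PyTorch calls, not part of the project's own scripts:

```python
# Quick check that the CUDA-enabled torch build is installed and a GPU is visible.
import torch

print(torch.__version__)           # should report the cu121 build, e.g. "2.1.2+cu121"
print(torch.cuda.is_available())   # True if a CUDA-capable GPU and driver are detected
```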
Set the proxy in set.ini as PROXY=proxy address. For example, if your proxy address is http://127.0.0.1:10189, then fill in

PROXY=http://127.0.0.1:10189
Execute the command to start the service:
python start.py
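Once the service prints its address and port, you can confirm it is reachable. The sketch below assumes the default http://127.0.0.1:9911 and a LibreTranslate-compatible /languages endpoint; adjust the address and port to whatever the console actually shows.

```python
# Minimal reachability check for the local OTT service (assumed default port 9911).
import requests

resp = requests.get("http://127.0.0.1:9911/languages", timeout=10)
resp.raise_for_status()
print(resp.json())  # expected: a JSON list describing the languages the service supports
```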