Tags: Text Generation · Transformers · PyTorch · English · experimental · research · bit-level · transformer · reversible · safety · telemetry · language-modeling
Instructions to use WCNegentropy/BitTransformerLM with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use WCNegentropy/BitTransformerLM with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="WCNegentropy/BitTransformerLM")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("WCNegentropy/BitTransformerLM", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use WCNegentropy/BitTransformerLM with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "WCNegentropy/BitTransformerLM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WCNegentropy/BitTransformerLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```bash
docker model run hf.co/WCNegentropy/BitTransformerLM
```
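However the vLLM server is started, it exposes an OpenAI-compatible API, so it can also be queried from Python. A minimal sketch using the official openai client; the base_url, placeholder API key, and generation settings are assumptions for a default local deployment:

```python
from openai import OpenAI

# Point the client at the local vLLM server (default host/port assumed)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="WCNegentropy/BitTransformerLM",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```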
- SGLang
How to use WCNegentropy/BitTransformerLM with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "WCNegentropy/BitTransformerLM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WCNegentropy/BitTransformerLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "WCNegentropy/BitTransformerLM" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "WCNegentropy/BitTransformerLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use WCNegentropy/BitTransformerLM with Docker Model Runner:
```bash
docker model run hf.co/WCNegentropy/BitTransformerLM
```
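The following development utility, built on the watchdog package, watches Python source files (by default the bit_transformer package and mcp_server.py), runs the test suite whenever a .py file changes, and then restarts the configured command (the MCP server by default):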
```python
import argparse
import subprocess
import sys
import time
from pathlib import Path

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class RestartOnChange(FileSystemEventHandler):
    """Restart a subprocess when watched files change."""

    def __init__(self, command: list[str], watch_paths: list[str]) -> None:
        self.command = command
        self.watch_paths = [Path(p).resolve() for p in watch_paths]
        self.process: subprocess.Popen | None = None
        self.restart()

    def restart(self) -> None:
        if self.process and self.process.poll() is None:
            self.process.terminate()
            try:
                self.process.wait(timeout=5)
            except subprocess.TimeoutExpired:
                self.process.kill()
                self.process.wait()
        self.process = subprocess.Popen(self.command)

    def on_any_event(self, event) -> None:  # pragma: no cover - runtime utility
        if event.is_directory:
            return
        path = Path(event.src_path)
        if path.suffix != ".py":
            return
        if any(str(path).startswith(str(p)) for p in self.watch_paths):
            print(f"[watcher] {path} changed, running tests...")
            subprocess.run([sys.executable, "-m", "pytest", "-q"])
            print("[watcher] restarting process...")
            self.restart()


def main() -> None:  # pragma: no cover - CLI entry
    parser = argparse.ArgumentParser(
        description="Watch files and restart a command on changes",
    )
    parser.add_argument(
        "--command",
        nargs="+",
        default=[sys.executable, "mcp_server.py"],
        help="Command to run",
    )
    parser.add_argument(
        "--paths",
        nargs="+",
        default=["bit_transformer", "mcp_server.py"],
        help="Paths to watch for changes",
    )
    args = parser.parse_args()

    observer = Observer()
    handler = RestartOnChange(args.command, args.paths)
    for p in args.paths:
        observer.schedule(handler, p, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        pass
    finally:
        observer.stop()
        handler.restart()
        if handler.process and handler.process.poll() is None:
            handler.process.terminate()
            handler.process.wait()
        observer.join()


if __name__ == "__main__":  # pragma: no cover - CLI entry
    main()
```
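Assuming the script is saved as `watch_and_restart.py` (the filename is an assumption) and that `watchdog` and `pytest` are installed, an invocation mirroring its defaults is `python watch_and_restart.py --command python mcp_server.py --paths bit_transformer mcp_server.py`; pressing Ctrl+C stops the watcher and terminates the child process.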