Platform Introduction
Welcome to RunC.AI. We are committed to providing clients with a wide range of computing resources. We support short-term rentals by the day or by the hour as well as long-term monthly rentals, covering both short-term tasks and long-running, stable projects. With one-step configuration of compute, storage, and networking, you can enjoy efficient cloud-based programming and training.
What RunC.AI can offer:
- Two instance types are available: virtual machines and Docker containers. Choose whichever fits your workload.
- Nodes in multiple global regions let you deploy GPUs closer to your users for minimal latency.
- A rich variety of model images, such as LLamaFactory, SD-webUI, and ComfyUI. Thanks to the fast startup of containers, these can be deployed with one click.
- For virtual machine instances, OS images such as Ubuntu and Windows are provided.
- Multiple billing cycles, with billing accurate to the second; on-demand instances are charged only for the time the GPU container instance is actually running.
- Out-of-the-box SSH and Jupyter access, configured automatically (see the SSH sketch after this list).
- Object Storage shared across instances: mounting the same Object Storage to multiple instances lets them exchange data and saves disk space.
- Unified endpoints for generating text, images, and code from the web console or the API.
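Connecting to an instance over SSH is typically just a matter of pointing a client at the host, port, and key shown on the instance detail page. The sketch below uses Python with paramiko; the hostname, port, username, and key path are placeholders for illustration, not values defined by RunC.AI.

```python
import os
import paramiko

# Placeholder connection details -- substitute the values shown on your
# instance's detail page in the RunC.AI console.
HOST = "gpu-instance.example.runc.ai"   # hypothetical hostname
PORT = 22022                            # hypothetical SSH port
USER = "root"                           # hypothetical username
KEY_PATH = os.path.expanduser("~/.ssh/id_rsa")  # your private key

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, port=PORT, username=USER, key_filename=KEY_PATH)

# Quick sanity check: confirm the GPU is visible inside the instance.
stdin, stdout, stderr = client.exec_command("nvidia-smi")
print(stdout.read().decode())

client.close()
```

The same credentials work from any standard SSH client; Jupyter access is exposed through the instance's HTTP port and opened from the console.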
Register an account now and get started!
Terminology
- POD: A container-based GPU instance. Applications access the GPU directly with near-full performance, which suits AI and cloud-native workloads.
- VM: A virtual-machine instance. GPUs are exposed through virtualization technology (such as vGPU), trading a small amount of performance for stronger isolation.
- Image: An image is a lightweight, standalone package that includes everything needed to run an application: code, runtime, libraries, and settings. Container/POD instances are created from images.
- HTTP/TCP Ports: Network endpoints exposed by containers for communication. HTTP ports (e.g., 80, 443) for web services, TCP ports for general network protocols.
- Container Disk: Root filesystem partition containing OS and system binaries. Mounted as read-write at container startup.
- Volume Disk: High-performance ephemeral storage directly attached to the GPU instance. Optimized for I/O intensive workloads but non-persistent across instance restarts.
- Network Volume: Network-attached storage service that allows users to access and manage data through a network. Data can be stored on remote servers and accessed from any device.
- AI Model Endpoints: Offer a unified API and an intuitive web interface for calling diverse AI models. Users can generate text, images, and code directly from the platform's front end or integrate these capabilities into their applications, with efficient deployment, scalable performance, and continuous access to the latest models (see the API sketch below).
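A minimal sketch of calling a unified model endpoint from Python is shown below. The base URL, authentication scheme, route, model identifier, and response field are all illustrative assumptions; consult the console for the actual API reference.

```python
import requests

# All names below are assumptions for illustration: RunC.AI's real endpoint
# URL, auth scheme, model identifiers, and response format may differ.
API_BASE = "https://api.runc.ai/v1"   # hypothetical base URL
API_KEY = "YOUR_API_KEY"              # assumed to be issued from the web console


def generate_text(prompt: str, model: str = "example-llm") -> str:
    """Send a prompt to a hypothetical unified text-generation endpoint."""
    resp = requests.post(
        f"{API_BASE}/generate/text",               # hypothetical route
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "prompt": prompt},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["output"]                   # assumed response field


if __name__ == "__main__":
    print(generate_text("Write a haiku about GPUs."))
```

The same calls can be issued from the web front end without writing any code; the API is intended for integrating generation into your own applications.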
Contact Us
If you have any questions or suggestions, please email contact@runc.ai. We aim to reply within hours to provide the best possible support.