SkyNet Upgrade
In this litepaper we outline an important update to Phoenix’s Computation Layer, our Web 3-based platform for computation and AI scaling. We briefly summarize the current state of the Computation Layer, describe what the SkyNet update will include, and explain why it is significant and expected to further accelerate the growth and utility of the platform.
For those familiar with Phoenix Computation Layer (formerly known as Computational L2), it serves as a Web 3-based platform for scaling decentralized AI and MPC (multi-party computation). While MPC is an important and unique feature of the platform, this paper focuses on AI computation scaling. The Computation Layer enables users to deploy and scale AI models without writing any code via the Computation Layer Control Panel, and gives developers programmatic access via the SDK (software development kit).
This gives users rapid time-to-value as well as low-cost resources via Phoenix’s AI Node Network, which consists of idle AI-ready cloud computing resources from Phoenix’s Enterprise Partner network (sourced through partners such as APEX Technologies, ByteDance, and Tencent Cloud). We believe in the premise of Web 3 to increase not only cost-efficiency and scalability for AI, but also accessibility, ease of use, and the general democratization of the technology.
Phoenix Computation Layer has seen rapid growth in usage over the past months, with nearly 3x user growth and a nearly 4x increase in AI & Privacy Computation Jobs since March 2023. Additionally, with the recent launch of the AlphaNet Public Beta, whose trading-market AI models run on the Computation Layer, we will also see growth in ‘Application Processes’: continuous jobs that involve AI model training and updates stemming from various Phoenix dApps. Phoenix is gradually growing an application ecosystem (the latest addition being NYBL) that will build on the AI and computation-scaling capabilities of the platform.
The Phoenix Computation Layer SkyNet upgrade, henceforth referred to as “SkyNet”, aims to further increase the utility, scalability, and ecosystem growth of the Computation Layer by delivering the following:
Compute Resource Decentralization
Complete decentralization of compute resources will enable a broad range of ecosystem participants (miners, enterprises, the Phoenix community, gamers, etc.) to provide CPU/GPU resources in return for tokenized rewards, creating a highly scalable AI compute platform.
Significant Cost Effectiveness & Scalability
The SkyNet upgrade is expected to significantly reduce computing costs, with up to 80% savings compared with traditional GPU-enabled cloud computing resources. This will be partly due to our new AI-based resource routing system (explained in more detail below).
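To make the claimed savings concrete, here is a minimal worked example. The hourly rate below is a hypothetical cloud price chosen for illustration, not actual SkyNet or cloud pricing; only the “up to 80%” figure comes from the text.

```python
# Illustrative arithmetic for the claimed savings. The $2.50/GPU-hour rate
# is a made-up example price, not actual pricing from any provider.
traditional_rate = 2.50          # USD per GPU-hour (hypothetical)
savings = 0.80                   # the "up to 80%" figure from the text
skynet_rate = traditional_rate * (1 - savings)

gpu_hours = 1_000                # e.g. a month of sustained training
print(f"traditional: ${traditional_rate * gpu_hours:,.2f}")  # $2,500.00
print(f"skynet:      ${skynet_rate * gpu_hours:,.2f}")       # $500.00
```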
GPU Resource Aggregation
The upgrade will enable more effective aggregation and smart allocation of GPU compute resources, which will be key to cost-effectively scaling deep learning and high workload AI models.
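One simple way to picture “smart allocation” of aggregated GPU resources is greedy cost-based packing: route each job to the cheapest pool that still has capacity. The sketch below is an illustration of that idea only; the class and function names (`GpuPool`, `Job`, `allocate`) and the greedy strategy are assumptions for this example, not the actual SkyNet allocator.

```python
# Hypothetical sketch of cost-aware GPU allocation across aggregated pools.
# Names and the greedy strategy are illustrative, not SkyNet's implementation.
from dataclasses import dataclass

@dataclass
class GpuPool:
    name: str
    free_gpus: int
    cost_per_gpu_hour: float  # USD

@dataclass
class Job:
    name: str
    gpus_needed: int

def allocate(jobs, pools):
    """Assign each job to the cheapest pool with enough free GPUs."""
    assignments = {}
    for job in jobs:
        candidates = [p for p in pools if p.free_gpus >= job.gpus_needed]
        if not candidates:
            assignments[job.name] = None  # no capacity: job must wait
            continue
        best = min(candidates, key=lambda p: p.cost_per_gpu_hour)
        best.free_gpus -= job.gpus_needed
        assignments[job.name] = best.name
    return assignments

pools = [GpuPool("miner-pool", 8, 0.40), GpuPool("cloud-pool", 32, 2.00)]
jobs = [Job("train-a", 4), Job("train-b", 8)]
print(allocate(jobs, pools))
# {'train-a': 'miner-pool', 'train-b': 'cloud-pool'}
```

Note how the second job falls back to the pricier pool once the cheap pool’s remaining capacity is too small, which is the basic benefit of aggregating heterogeneous resources behind one scheduler.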
Optimized Token Economics
An optimized token economy will arise from the SkyNet upgrade, involving both PHB and Computation Credits (CCD). CCD will be used to incentivize ecosystem participants (both users and resource providers) and will synergize with the PHB token economy (staking programs, hybrid staking, etc.). A separate paper will outline the CCD token economy in detail.
Increased Accessibility of AI
The objective is not only to make AI and its resources (i.e., GPU computation) much lower cost and more scalable, but also to optimize accessibility, ease of use, and time-to-value. This has always been a key tenet championed by Phoenix Core Development, currently seen in features such as rapid AI model deployment within the Computation Layer Control Panel. Additional technical features will be implemented to make it easier for developers to deploy AI models with less code and DevOps (developer operations) overhead.
More technical details regarding SkyNet are to come in the upcoming 2023 Phoenix Whitepaper release.
There will be two major milestones for SkyNet:
Local Distributed Compute - Each AI job or process can scale within one datacenter or local network. Different jobs will be routed independently to local networks determined by SkyNet’s routing system. Enterprise partners and mining firms will be invited to join the network. [Estimated Initial Launch: Q3 2023]
Global Distributed Compute - Each AI job or process can scale across multiple local networks and geographic locations, with processing performed asynchronously. SkyNet’s routing system will allocate resources based purely on the optimal balance of resource cost and efficiency, as determined by the core MAPPO (multi-agent PPO) model. Upon the launch of Global Distributed Compute, gaming-machine owners and Phoenix community members will be welcome to test and join the network. [Estimated Initial Launch: Q4 2023]
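The cost/efficiency trade-off driving the routing decision can be sketched as a simple weighted score over candidate networks. In SkyNet the paper says the balance is determined by the MAPPO model; in the sketch below a fixed `cost_weight` stands in for that learned policy, and the function name, metrics, and example figures are all assumptions for illustration.

```python
# Hypothetical sketch of routing by a cost/efficiency balance. A learned
# MAPPO policy would set the trade-off dynamically; a fixed weight stands
# in for it here. All names and numbers are illustrative.
def route_score(cost_per_hour, throughput, latency_ms, cost_weight=0.5):
    """Lower is better: weighted cost minus weighted effective efficiency."""
    efficiency = throughput / (1.0 + latency_ms / 100.0)  # latency-discounted
    return cost_weight * cost_per_hour - (1.0 - cost_weight) * efficiency

# Candidate networks: (cost USD/hr, throughput jobs/hr, latency ms)
networks = {
    "us-east-miners": (0.50, 90.0, 40.0),
    "eu-datacenter": (1.20, 120.0, 10.0),
}
best = min(networks, key=lambda n: route_score(*networks[n]))
print(best)  # eu-datacenter
```

With these example numbers the pricier datacenter wins because its efficiency advantage outweighs its cost; shifting `cost_weight` toward 1.0 would flip the decision, which is the kind of trade-off a learned routing policy would manage.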