Vitalik shares local private LLM solution, emphasizing privacy and security first
ME News update, April 2 (UTC+8). Vitalik Buterin published a post sharing his local, private LLM deployment setup as of April 2026. The core goal is to treat privacy, security, and self-sovereign control as prerequisites: minimize the opportunities for personal data to reach remote models and external services, and reduce the risks of data leakage, model jailbreaks, and malicious-content exploitation through local inference, local file storage, and sandbox isolation. On the hardware side, he tested a laptop equipped with an NVIDIA 5090 GPU, an AMD Ryzen AI Max Pro device with 128 GB of unified memory, and a DGX Spark, running local inference with the Qwen3.5 35B and 122B models. On the 35B model, the 5090 laptop reaches about 90 tokens/s, the AMD setup about 51 tokens/s, and the DGX Spark about 60 tokens/s. Vitalik said he leans toward building his local AI environment on a high-performance laptop, using tools such as llama-server, llama-swap, and NixOS to set up the overall workflow. (Source: ODAILY)
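The article names llama-server (llama.cpp's local HTTP inference server) and llama-swap (a proxy that launches and swaps llama.cpp backends on demand) without showing how they fit together. As a minimal sketch only: a llama-swap `config.yaml` along these lines would let one endpoint serve both model sizes. The model file names, paths, and ports are illustrative assumptions, not from the post, and the exact config keys should be verified against the llama-swap documentation.

```yaml
# Hypothetical llama-swap config.yaml sketch.
# Model paths, ports, and file names are illustrative assumptions.
models:
  "qwen-35b":
    # llama-server is llama.cpp's server binary: -m selects the GGUF model,
    # -ngl offloads layers to the GPU, --port must match the proxy target below.
    cmd: llama-server -m /models/qwen3.5-35b.gguf -ngl 99 --port 9001
    proxy: http://127.0.0.1:9001
  "qwen-122b":
    cmd: llama-server -m /models/qwen3.5-122b.gguf -ngl 99 --port 9002
    proxy: http://127.0.0.1:9002
```

With a setup like this, clients send OpenAI-style requests to llama-swap's single listening port, and the proxy starts whichever backend the request's model name selects, so only one model occupies GPU memory at a time — a reasonable fit for the single-laptop environment Vitalik describes.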