GitHub - intel-analytics/ipex-llm: Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, Phi, MiniCPM, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max); seamlessly integrate with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, GraphRAG, DeepSpeed, vLLM, FastChat, Axolotl, etc.
Scala Resources
All Scala resources are listed below.
GitHub - t2v/play2-auth: Play2.x Authentication and Authorization module
GitHub - twitter/finagle: A fault tolerant, protocol-agnostic RPC system
Made with ❤️ to provide different kinds of information and resources.