
#coding

14 posts · 12 participants · 1 post today

Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Whereas you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine.

LLMs for coding, and testing open models locally on AMD

LLM coding assistants already show excellent results on benchmarks and in real-world tasks, so now seems like a good time to start trying them out. The article surveys open LLMs for coding: Are they comparable to subscription models? Can you use them for real work? And is there a way to start locally? In the tutorial part: 1. Launch a model in Docker using llama.cpp. 2. Measure generation speed (a minimal timing sketch follows the link below). 3. Speed it up with speculative decoding. 4. Hook it into VS Code, working both locally and over SSH.

habr.com/ru/articles/889310/
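
As a rough illustration of step 2 (not code from the article itself), here is a minimal sketch of measuring generation speed with the llama-cpp-python bindings; the GGUF path and prompt are placeholders, not values taken from the post.

```python
# Minimal timing sketch using llama-cpp-python (pip install llama-cpp-python).
# The model path and prompt below are hypothetical placeholders.
import time

from llama_cpp import Llama

llm = Llama(
    model_path="models/coder-7b-q4_k_m.gguf",  # any local GGUF file
    n_ctx=4096,
    verbose=False,
)

prompt = "Write a Python function that reverses a linked list."

start = time.perf_counter()
result = llm(prompt, max_tokens=256)
elapsed = time.perf_counter() - start

# The completion dict reports token usage, which gives tokens per second.
n_tokens = result["usage"]["completion_tokens"]
print(f"{n_tokens} tokens in {elapsed:.1f} s -> {n_tokens / elapsed:.1f} tok/s")
```

The same measurement applies when the model is served by llama.cpp running in Docker; llama.cpp can also pair the main model with a smaller draft model, which is the speculative-decoding speed-up the article's step 3 refers to.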


⚙️ I've tried it, and it is clearly very capable; as always, on the #coding side it is among the best.
💡 We have now reached a point of convergence in performance: each new model slightly outperforms its competitors, until their next release.
🔗 The announcement post: anthropic.com/news/claude-3-7-

___ 

✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomar

www.anthropic.com · Claude 3.7 Sonnet and Claude Code: “Today, we’re announcing Claude 3.7 Sonnet, our most intelligent model to date and the first hybrid reasoning model generally available on the market.”