What's New in Our Catalog

Newest Releases

AI Code Review Showdown: Anthropic's Claude vs IBM's Granite (GPXX0LVJEN) | View in the Skills Network Catalog

Compare Anthropic's Claude 3.7 Sonnet and IBM's Granite 3.2 8B Instruct models for Python code review tasks in both reasoning and non-reasoning modes. This lab evaluates how these hybrid reasoning models perform when analyzing syntax errors, algorithms, authentication systems, and architecture. Discover which model delivers better accuracy, speed, and cost-efficiency for different coding scenarios. Learn when reasoning mode provides advantages over non-reasoning mode and identify optimal use cases for each approach based on comprehensive performance metrics.
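To get a feel for what the comparison involves before starting the lab, here is a minimal sketch (not the lab's own code) that sends the same buggy Python snippet to Claude 3.7 Sonnet twice through the Anthropic Python SDK, once with extended thinking enabled and once without. The model ID, token budgets, and prompt are illustrative assumptions, and the Granite 3.2 side of the comparison is omitted here.

```python
# Minimal sketch: the same code-review prompt in non-reasoning vs. reasoning mode.
# Assumes the `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment;
# the model ID, token budgets, and sample snippet are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

BUGGY_SNIPPET = '''
def average(values):
    return sum(values) / len(values)   # crashes on an empty list
'''

prompt = f"Review this Python function and list any bugs or risky edge cases:\n{BUGGY_SNIPPET}"

# Non-reasoning mode: a single plain completion.
plain = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)

# Reasoning mode: extended thinking with an explicit token budget.
reasoned = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": prompt}],
)

for label, resp in [("non-reasoning", plain), ("reasoning", reasoned)]:
    text = "".join(block.text for block in resp.content if block.type == "text")
    print(f"--- {label} ---\n{text}\n")
```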

Updates

Fundamentals of Encryption & Quantum-Safe Techniques (SE0101EN) | View in the Skills Network Catalog

In this short course, learn about encryption, quantum computers, and quantum-safe cryptography to secure your IT infrastructure against emerging threats. Understand the main types of encryption, how keys are managed, and the consequences when encryption is cracked. Prepare for the quantum era and be ready to safeguard your data with IBM Quantum Safe technology.

Introduction to Agentic AI Tools (AI0149EN) | View in the Skills Network Catalog

Talk to your data instead of coding SQL queries. Imagine asking a database “Who spent the most last year?” and getting an instant answer. In this course, you’ll build two working AI agents that make this possible: one that turns plain English into SQL queries for real databases, and another that lets you chat with pandas DataFrames using LangChain. Along the way, you’ll gain hands-on practice with agentic tools, seeing how they reason about data, generate queries, and make exploration intuitive.
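As a rough taste of what the two agents look like, here is a minimal LangChain sketch. The OpenAI model name, the SQLite file path, and the sample DataFrame are placeholder assumptions, and the exact import locations can shift between LangChain versions.

```python
# Minimal sketch of the two agents described above, assuming the langchain-community,
# langchain-experimental, and langchain-openai packages are installed.
# The model name, database path, and sample data are illustrative assumptions.
import pandas as pd
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_experimental.agents import create_pandas_dataframe_agent

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# Agent 1: plain English -> SQL against a real database.
db = SQLDatabase.from_uri("sqlite:///sales.db")   # hypothetical database file
sql_agent = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
print(sql_agent.invoke({"input": "Who spent the most last year?"})["output"])

# Agent 2: chat with a pandas DataFrame.
df = pd.DataFrame({"customer": ["Ada", "Lin"], "total": [1200, 950]})
pandas_agent = create_pandas_dataframe_agent(
    llm, df, verbose=True,
    allow_dangerous_code=True,   # required in recent versions; the agent runs generated Python
)
print(pandas_agent.invoke({"input": "Which customer has the highest total?"})["output"])
```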

Data Science with Scala (SC0105EN) | View in the Skills Network Catalog

Apache Spark™ is a fast and general engine for large-scale data processing, with built-in modules for streaming, SQL, machine learning and graph processing. This course shows how to use Spark’s machine learning pipelines to fit models and search for optimal hyperparameters using a Spark cluster.
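The course itself works in Scala, but the core idea, chaining feature stages and an estimator into a Pipeline and letting CrossValidator sweep a parameter grid on the cluster, can be previewed in this rough PySpark sketch; the input file, column names, and grid values are assumptions.

```python
# Rough sketch of a Spark ML pipeline with grid-search cross-validation.
# The course uses Scala; this PySpark version shows the same idea.
# The input path, column names, and parameter grid are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()
df = spark.read.csv("training.csv", header=True, inferSchema=True)  # hypothetical data

# Stage 1: assemble raw numeric columns into a feature vector.
assembler = VectorAssembler(inputCols=["age", "income"], outputCol="features")
# Stage 2: the estimator whose hyperparameters will be tuned.
lr = LogisticRegression(featuresCol="features", labelCol="label")
pipeline = Pipeline(stages=[assembler, lr])

# Parameter grid searched with 3-fold cross-validation across the cluster.
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1])
        .addGrid(lr.elasticNetParam, [0.0, 0.5])
        .build())
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(labelCol="label"),
                    numFolds=3)

best_model = cv.fit(df).bestModel   # best pipeline found by the search
```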