Moments In Digital Ink
Welcome to Moments In Digital Ink, where everyday experiences are captured and shared through the power of words. Discover heartfelt stories, thought-provoking insights, and creative musings that inspire and entertain. Join us on a journey of reflection, inspiration, and connection.
Wednesday, September 24, 2025
Nvidia Jetson ORIN NANO AI/ML Developer Kit
I finally received my long-backordered Nvidia Jetson Orin Nano AI/ML Developer Kit. I will be testing and reporting on AI/ML/LLM benchmarks and projects built with it.
I still own the Jetson Nano kit as well.
The NVIDIA Jetson Orin Nano is an edge AI computer available as an 8GB and a 4GB module, featuring an NVIDIA Ampere architecture GPU with 1024 CUDA cores (8GB version) and a 6-core Arm® Cortex®-A78AE CPU. It delivers up to 67 INT8 TOPS of AI performance (8GB module in Super mode), comes with 8GB (or 4GB) of LPDDR5 memory, and supports external storage via microSD and NVMe. The device is designed for edge AI applications, balancing performance, power efficiency (7W-15W for the 8GB module), and a compact form factor.
Here's a breakdown of the key specs for the 8GB Jetson Orin Nano module (a quick sketch for checking some of these from Python follows the list):
GPU
Architecture: NVIDIA Ampere
CUDA Cores: 1024
Tensor Cores: 32
AI Performance: Up to 40 INT8 TOPS (original) / 67 INT8 TOPS (Super mode)
CPU
Cores: 6-core Arm® Cortex®-A78AE v8.2 64-bit
Cache: 1.5MB L2 + 4MB L3
Clock Speed: Up to 1.5 GHz (original) or 1.7 GHz (Super version)
Memory
Capacity: 8GB
Type: 128-bit LPDDR5
Bandwidth: 68 GB/s (original) or 102 GB/s (Super version)
Storage
Supports microSD card for the base OS
Supports external NVMe SSDs via M.2 Key M slot
Power
Power Modes: 7W, 15W (and a new 25W "Super" mode)
Key Features
Size: Compact, ~69.6mm x 45mm
Connectivity: USB ports, GbE, HDMI/DP output, and multiple I/O options
Applications: Ideal for edge AI applications, robotics, smart vision systems, and more
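As a quick sanity check once JetPack is set up, you can query the GPU from Python. This is a minimal sketch, assuming a CUDA-enabled PyTorch build is installed on the Jetson (for example from NVIDIA's JetPack wheels); it only reads the device properties, and the CUDA core count is inferred from the SM count:
import torch  # assumes a CUDA-enabled PyTorch build for Jetson
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)                        # device name reported by the driver
    print("SMs:", props.multi_processor_count)       # Ampere has 128 CUDA cores per SM, so 8 SMs = 1024 cores
    print("Memory (GB):", round(props.total_memory / 1024**3, 1))  # shared LPDDR5 on Jetson
else:
    print("CUDA device not visible - check the JetPack / PyTorch install")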
Picture:
Picture - boot-up and firmware updating. I used a 128GB microSD card and downloaded the latest firmware from this link - click here.
Unboxing video:
Saturday, August 30, 2025
Exploring Google's Nano Banana AI - Gemini 2.5 Flash Image Preview
Below are images from my prompts while exploring Google's Nano Banana AI - Gemini 2.5 Flash Image Preview. The prompt ideas came from an Instagram post I saw; I will post the link below.
I took a picture of myself, and entered the first prompt below.
Result
Not bad. Then:
The prompt below took 161.1 seconds, and then the system crashed.
I re-entered the words, this time in quotes. Enugu is the name of the city where I was born.
Result in 227.1 seconds
Then I tried a merge. The generated image wasn't impressive, particularly the positioning of the can.
I gave my feedback on this to the Gemini/Google system and decided to rerun the prompt.
In 98.6 seconds I got:
This looks much better!
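For anyone who would rather drive the same model from code than from the Gemini app, the sketch below shows roughly how an image-plus-prompt request could look. This is a minimal sketch, assuming the google-genai Python SDK, a GEMINI_API_KEY in the environment, and a local photo; the file names and prompt text are just placeholders:
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

with open("selfie.jpg", "rb") as f:  # placeholder input photo
    photo = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[photo, "Restyle this photo as a 1990s studio portrait"],  # placeholder prompt
)

# Save any image parts returned alongside the text
for part in response.candidates[0].content.parts:
    if part.inline_data:  # image bytes come back as inline data
        with open("result.png", "wb") as out:
            out.write(part.inline_data.data)
    elif part.text:
        print(part.text)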
Friday, August 1, 2025
Hugging Face - Upgrade to Xet
I received an informational email about Hugging Face upgrading the Hub's storage backend from Git LFS to Xet. It stated that "Xet is our chunk-based backend that already powers 50% of all Hub downloads and serves the Llama, Qwen, Gemma, and Phi model families."
Xet Documentation link: https://huggingface.co/docs/hub/en/storage-backends#using-xet-storage
Command given:
pip install -U huggingface_hub
Output:
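After the upgrade, the download API stays the same, so existing scripts keep working and Xet-backed repos are handled transparently on the backend. A minimal sketch, assuming a recent huggingface_hub (the repo and file below are only example values):
import huggingface_hub
from huggingface_hub import hf_hub_download

print("huggingface_hub version:", huggingface_hub.__version__)

# Downloads go through the Hub as before; the storage backend is transparent to the caller.
path = hf_hub_download(repo_id="Qwen/Qwen2.5-0.5B-Instruct", filename="config.json")  # example repo/file
print("Downloaded to:", path)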
Meet Sohu, the fastest AI chip of all time
It will be interesting to see how much progress is made on this Sohu chip.
Friday, July 25, 2025
Release Notes: Gemini's multimodality
"Ani Baddepudi, Gemini Model Behavior Product Lead, joins host Logan Kilpatrick for a deep dive into Gemini's multimodal capabilities. Their conversation explores why Gemini was built as a natively multimodal model from day one, the future of proactive AI assistants, and how we are moving towards a world where "everything is vision." Learn about the differences between video and image understanding and token representations, higher FPS video sampling, and more."
Large Language Models explained briefly
A great video that explains LLMs simply, from the 3Blue1Brown YouTube channel, which explains maths using animation.
