Doubling Technologies Live Streams

VODs

Aaron Jackson

I primarily use this channel to post videos of projects I build in live streams. I generally stream builds like these on Sundays at 2 pm Pacific at https://twitch.tv/yolanother. Tune in there, or check the VODs here after the streams for more builds like this one.

What happens when you take a masterpiece and turn it into something you can actually feel?

In this experiment, I recreate one of the most famous paintings in history—The Starry Night by Vincent van Gogh—as a textured 3D print. The goal is to capture the swirling brush strokes and depth of the original painting so it can exist as a physical piece of wall art rather than just a flat image.

This is my first attempt at converting public domain art into a printable depth map, and it revealed some interesting challenges. My current models work extremely well for structured relief designs like challenge coins, but paintings introduce a completely different problem: accurately recreating high-resolution brush stroke textures.
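For readers curious what the depth-map step looks like in code, here is a minimal sketch. It is not the model-based pipeline from the video; it uses a naive luminance heuristic (brighter paint reads as higher relief) with Pillow and NumPy, and the file names are placeholders.

```python
# Naive stand-in for the AI depth-map models mentioned above: treat
# luminance as height. Real brush-stroke relief needs far more than this.
import numpy as np
from PIL import Image, ImageFilter

def image_to_heightmap(src_path: str, dst_path: str, blur_px: int = 1) -> None:
    """Convert a painting into an 8-bit grayscale heightmap PNG."""
    img = Image.open(src_path).convert("L")              # luminance channel only
    img = img.filter(ImageFilter.GaussianBlur(blur_px))  # soften scan noise
    h = np.asarray(img, dtype=np.float32) / 255.0        # normalize to 0..1
    Image.fromarray((h * 255).astype(np.uint8)).save(dst_path)

# Placeholder file names; slicers with an emboss/lithophane feature can
# extrude the result into relief (16-bit output would preserve more depth).
image_to_heightmap("starry_night.jpg", "starry_night_height.png")
```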

The results are promising, but there’s still plenty of room for improvement. Next steps include researching better techniques for extracting brush stroke depth and experimenting with higher fidelity depth maps.

Fun fact: the Museum of Modern Art has produced an incredibly detailed 3D scan of the painting, which would be an amazing reference for recreating the true texture of the original artwork.

The print shown in the video measures roughly 400 mm × 300 mm, which is currently about the largest size my pipeline can process.

This project sits at the intersection of art, AI tooling, and digital fabrication—and it’s just the beginning.

If you enjoy experiments combining technology, art, and making, subscribe for more projects like this.

#3DPrinting #StarryNight #DigitalFabrication #ArtTech #Maker #reliefprinting 


Prelude No. 9 by Chris Zabriskie is licensed under a Creative Commons Attribution 4.0 license. https://creativecommons.org/licenses/by/4.0/
Recreating the Masters | Turning Starry Night into a 3D Print
Lately I’ve been building a real-time conversational Discord NPC focused on natural voice interaction in group settings.

Voice is easy to demo in a one-on-one scenario. It’s much harder when you introduce multiple speakers, interruptions, overlapping speech, latency, and messy real-world audio. That’s where things get interesting.

This bot isn’t just a feature — it’s a test engine. It lets me see every phase of the pipeline: audio capture, transcription, turn detection, inference, and TTS. That visibility helps me identify friction points and improve the reusable voice components I’m building across several projects.

Under the hood, I’m using high-quality TTS from ElevenLabs, local ASR with Whisper, and fast inference with Cerebras. That stack lets me iterate quickly and focus on solving the actual conversation problems.
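To make the pipeline concrete, here is a hypothetical sketch of one conversational turn using that stack. The endpoints are the public ElevenLabs and Cerebras APIs, but the model name, voice ID, and keys are placeholders, and turn detection, interruption handling, and Discord audio I/O are deliberately left out.

```python
# Hypothetical glue for the capture -> ASR -> LLM -> TTS loop described above.
import requests
import whisper

asr = whisper.load_model("base")  # local ASR; larger models trade speed for accuracy

def transcribe(wav_path: str) -> str:
    return asr.transcribe(wav_path)["text"]

def infer(prompt: str, api_key: str) -> str:
    # Cerebras exposes an OpenAI-compatible chat completions endpoint.
    resp = requests.post(
        "https://api.cerebras.ai/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": "llama3.1-8b",  # placeholder model name
              "messages": [{"role": "user", "content": prompt}]},
    )
    return resp.json()["choices"][0]["message"]["content"]

def speak(text: str, api_key: str, voice_id: str) -> bytes:
    # ElevenLabs text-to-speech; returns raw audio bytes.
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": api_key},
        json={"text": text},
    )
    return resp.content

def handle_turn(wav_path: str, llm_key: str, tts_key: str, voice_id: str) -> bytes:
    # One conversational turn: hear -> think -> speak.
    return speak(infer(transcribe(wav_path), llm_key), tts_key, voice_id)
```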
Testing Nova, a Discord Voice Chat Bot
I wanted to build a couple of new experiences with NPCs this week and realized I wanted a completely fresh start. The result is Synapse, a natural-interaction, LLM, and interactive NPC library built from the ground up. This is a look at the week's work so far. Synapse is less than a week old, and we can already start to talk to NPCs. This is a demo showing off push-to-talk. There's still plenty of work to do, including improving lipsync, but you can already have a reasonable conversation with them!
Synapse First Demo: PTT, First Signs of Life
Flying around and diving a bit at the field.
Flying the Pico II at the SRAC Field
Chasing a model jet with @ShadowVFX
In the rapidly evolving landscape of software development, mastering the art of prompting AI coding assistants is becoming an essential skill for developers. These innovative tools, often referred to as "vibe coding" platforms, like Claude Code and Roo Code, are transforming how code is written and optimized. By crafting smart, targeted prompts, developers can significantly enhance the output of these AI assistants, leading to improved coding results and better cost management. This approach not only augments the development process but also ensures that human creativity and critical thinking remain integral. Exploring advanced features such as specialized agents and memory systems can further refine workflows, offering a competitive edge in a field where AI's influence is expanding rapidly. As these technologies continue to advance, they raise intriguing questions about the evolving roles of human and AI contributions in software development. Embracing these tools and techniques is crucial for staying ahead in this dynamic environment.
Unleashing the Power of AI in Software Development & Refactoring
In the ever-evolving world of gaming, the quest to create non-playable characters (NPCs) with authentic personalities is gaining momentum, driven by innovative AI research. This exploration delves into the cutting-edge strategies employed by scientists to infuse digital characters with a semblance of an inner life, thereby enhancing their conversational and interactive capabilities. By leveraging psychological models and advanced data techniques, researchers are crafting NPCs that transcend their traditional robotic nature, aiming to make them more engaging and lifelike. This endeavor not only highlights the current advancements but also underscores the challenges ahead in achieving truly human-like digital interactions. As players navigate virtual worlds, they are encouraged to reflect on the nuances of NPC personalities—whether it's their consistency, emotional depth, or backstory—that contribute to a more immersive gaming experience. This ongoing research promises to redefine the dynamics of player-NPC interactions, potentially revolutionizing the gaming landscape.
Unveiling the Psychology of Chatbots: A Comprehensive Survey
Fine-tuning generative AI models is an exciting frontier in technology, offering the ability to customize powerful AI systems to meet specific needs. This process can be likened to tailoring a pre-made suit to fit perfectly, enhancing the AI's capabilities for specialized tasks. One of the most compelling applications is in creating highly personalized 3D avatars. By fine-tuning AI, developers can generate avatars that reflect unique styles, specific features, and even emotions, opening up a world of personalization for digital identities and applications. The discussion highlights efficiency techniques such as LoRA and tools like Azure that streamline the fine-tuning process, making it more accessible and less daunting. As the potential for creating next-level avatars becomes more tangible, the possibilities for personalization are endless. This exploration encourages readers to consider what characteristics and artistic styles they would prioritize in their ideal 3D avatars, inviting them to delve deeper into the transformative world of generative AI.
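As a rough illustration of what LoRA does mechanically, the sketch below wraps a small open model with Hugging Face's peft library. The video's Azure-based workflow is different, and the model and hyperparameters here are arbitrary examples, not its actual configuration.

```python
# Illustrative LoRA setup with Hugging Face peft: freeze the base model and
# train small low-rank adapter matrices instead of all weights.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # small stand-in model
config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor for adapter output
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```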
Mastering Generative AI: Fine-Tuning Secrets Revealed
Speech recognition technology has become an integral part of our daily interactions, often operating behind the scenes to transform spoken words into text. This intricate process involves two primary stages: acoustic processing, which converts sound waves into digital features, and linguistic decoding, where these features are matched against a dictionary and grammar rules to make sense of the input. The effectiveness of speech recognition is measured using metrics like Word Error Rate (WER), though these are not without limitations. Challenges such as varying accents and background noise are significant, but advancements like data augmentation and new architectures such as Mamba (used in models like Samba ASR) are paving the way for more robust solutions. As this technology evolves, it raises important questions about balancing accuracy, privacy, and accessibility. Looking ahead, the potential for new applications and seamless voice interfaces offers exciting possibilities for how we interact with technology in the future.
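Since the summary leans on Word Error Rate, here is a minimal sketch of how WER is computed: word-level edit distance (substitutions, insertions, deletions) divided by the reference length. Production evaluations also normalize casing and punctuation first, which this skips.

```python
# Minimal Word Error Rate: Levenshtein distance over words, divided by
# the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits to turn the first i ref words into the first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn it into something you can feel",
          "turn it in to something you feel"))  # 3 edits / 7 words ≈ 0.43
```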
Decoding the Future: Exploring Speech Recognition Technology