Transport | Logistics | Government | Private Enterprise

Powering Smart Energy & Automation

Enterprise-grade platforms for IoT, Web, and Mobile solutions that scale.

AI-Powered Product Development - Embedded Hardware and Software


SoSavvy delivered the hardware and software design behind a privately funded AI-powered camera project that fuses embedded vision, AI captioning, and creative generation using AWS Bedrock to produce real-time, personalised poems from captured images. Built as an edge-to-cloud system, it translates visual context into verse across multiple genres, blending art, AI, and human connection in a seamless physical-digital experience.

SoSavvy acted as technology partner to a private equity group exploring advances in AI and human connection. A think tank was formed to help alleviate phone-related addiction and stress, and to reignite human connectedness in a unique form.

The result, over multiple product-design and hardware cycles, is a camera built on a next-generation IoT imaging platform. It captures a subject, sends the image securely to AWS Bedrock's AI models for captioning, and then, based on configured genre preferences, returns a themed poem printed on physical paper or rendered as digital art. Designed for edge-to-cloud intelligence, the system combines low-latency inference, secured pipelines, and modular poem generation.

Leveraging AWS Bedrock and AWS IoT's on-demand device provisioning, we built a system that is both scalable and secure, with end-to-end encryption and user privacy at its core. The device supports multiple interaction modes, including instant print, digital display, and app-based sharing, making it versatile for use cases ranging from personal keepsakes to social sharing.

Architecture & Data Flow

On the device, a high-resolution camera module captures an image which is encoded and compressed, then pushed over TLS-encrypted HTTP/2 to a Lambda-backed API gateway. The image is processed by AWS Bedrock's vision model to derive caption metadata (objects, mood, environment). A poem-generation module, executing either in Bedrock or on a connected containerised endpoint, then selects a verse structure aligned with the captioned theme and genre settings (e.g., haiku, sonnet, free verse).
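The caption step above can be sketched using the message shape of Bedrock's Converse API. This is a minimal illustration only: the model ID, the prompt wording, and the `build_caption_request` helper are assumptions for the example, not the production implementation.

```python
import json

# Hypothetical model choice; the production model is not specified here.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_caption_request(image_bytes: bytes, image_format: str = "jpeg") -> dict:
    """Assemble a Converse-style request asking the vision model to
    describe objects, mood, and environment in the captured image."""
    return {
        "modelId": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"image": {"format": image_format,
                               "source": {"bytes": image_bytes}}},
                    {"text": "Describe the objects, mood, and environment "
                             "in this image as short caption metadata."},
                ],
            }
        ],
    }

# In production the request would be sent via
# boto3.client("bedrock-runtime").converse(**request); here we only
# show the payload shape the Lambda backend would assemble.
request = build_caption_request(b"<jpeg bytes>")
print(json.dumps({"modelId": request["modelId"]}))
```

The returned caption metadata (objects, mood, environment) then drives the genre-aware poem-generation step.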

Printing, Interaction & Mode Control

Based on the selected mode, the device can print the poem instantly via a lightweight thermal or inkjet output, display it on an embedded screen, or send it back to the user's mobile app as a shareable image. Users may switch genres, length, or tone preferences remotely, and the system adapts in real time without re-provisioning. Edge caching ensures local fallback if connectivity is intermittent.
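The mode control and local-fallback behaviour can be sketched as below. The `OutputMode` and `PoemDispatcher` names are illustrative assumptions; the sketch only shows the routing and edge-cache logic described above, not the device firmware.

```python
from enum import Enum

class OutputMode(Enum):
    PRINT = "print"      # thermal/inkjet hard copy
    DISPLAY = "display"  # embedded screen
    APP = "app"          # shareable image pushed to the mobile app

class PoemDispatcher:
    """Route a generated poem to the configured output, falling back to
    the last cached poem when connectivity is intermittent. Hypothetical
    sketch of the mode-control behaviour, not production code."""

    def __init__(self, mode: OutputMode = OutputMode.PRINT):
        self.mode = mode
        self._cache = None  # edge cache of the most recent poem

    def set_mode(self, mode: OutputMode) -> None:
        # Preferences are switched remotely, no re-provisioning needed.
        self.mode = mode

    def deliver(self, poem):
        if poem is None:  # cloud round-trip failed
            if self._cache is None:
                raise RuntimeError("no poem available offline")
            poem = self._cache  # local fallback from edge cache
        self._cache = poem
        return (self.mode, poem)
```

A `deliver(None)` call models a dropped connection: the dispatcher serves the cached poem in whatever mode is currently selected.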

Latency, Scaling & Costs

To maintain responsive behavior, we implement warm model containers, HTTP keep-alive, and regional deployment so round-trip latency for caption + poem generation stays under 300 ms. Poem generation scales horizontally, with autoscaling based on concurrent requests. Cost optimisation ensures minimal compute cycles per request and prioritised batching to reduce overhead in high-volume bursts.
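The prioritised-batching idea can be illustrated with a small helper: pending poem-generation requests are ordered by priority and chunked so a warm model container handles several prompts per invocation during bursts. The function and its parameters are assumptions for the sketch.

```python
def batch_by_priority(requests, max_batch, priority=lambda r: 0):
    """Order pending requests by priority (highest first), then group
    them into batches of at most max_batch so each warm-container
    invocation amortises its overhead across several requests."""
    ordered = sorted(requests, key=priority, reverse=True)
    return [ordered[i:i + max_batch]
            for i in range(0, len(ordered), max_batch)]
```

For example, interactive print requests could carry a higher priority value than background app-sharing requests, so they land in the earliest batches.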

Quality, Personalisation & Feedback Loop

Model outputs are ranked by semantic similarity (caption-poem alignment) and further filtered through style and content validators (toxicity, coherence). The system tracks user feedback ratings to iteratively refine prompt templates and tone preferences. Over time, each camera “learns” the aesthetic patterns and preferred phrasing of its user base—making the poetry feel more personal and contextually rich.
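The ranking-and-filtering step can be sketched as follows. The production system would score caption-poem alignment with an embedding model; the Jaccard token overlap below is a deliberately crude stand-in, and the validator callables (toxicity, coherence) are hypothetical placeholders.

```python
import re

def _tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def alignment_score(caption, poem):
    """Crude proxy for semantic similarity: Jaccard overlap between
    caption and poem vocabularies."""
    c, p = _tokens(caption), _tokens(poem)
    return len(c & p) / len(c | p) if c | p else 0.0

def rank_candidates(caption, poems, validators=()):
    """Drop candidates rejected by any content validator, then rank
    the survivors by caption-poem alignment, best first."""
    survivors = [p for p in poems if all(v(p) for v in validators)]
    return sorted(survivors,
                  key=lambda p: alignment_score(caption, p),
                  reverse=True)
```

User feedback ratings would then feed back into the prompt templates that produced the top-ranked candidates.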