

Researcher | Programmer | Entrepreneur
I design & build AI
Hi, I'm Sam Tukra.
Chief AI Officer (CAIO) of Applied Computing
and Research Lead at Imperial College London.
I’m a researcher, programmer, and entrepreneur working at the frontier of AI and heavy industry. I’ve spent the last 10 years building intelligent systems and leading full-stack ML teams from R&D to production across academia, the energy industry, and startups.
Today, I’m the Chief AI Officer & Co-Founder at Applied Computing, where we’ve built Orbital: a multi-agent, physics-grounded co-pilot that optimises large industrial process operations, starting with oil & gas refineries. My work bridges deep learning, physics-based modelling, multi-agent systems, and reinforcement learning to deliver AI that is not just smart but operational as a product.
I’ve published in top conferences like CVPR and journals like IEEE TPAMI, and I remain passionate about teaching and about turning cutting-edge research into impactful, deployable products.
Whether you’re a student, technologist, or energy professional, let’s connect.
2023 - Present
Applied Computing
Chief AI Officer & Co-Founder
Researching, building, and deploying AI systems operating in live industrial environments.
Designed and deployed Orbital, a physics-grounded AI system operating on live refinery and process data (the action-screening pattern behind it is sketched after this role's highlights).
Owned AI architecture, research direction, engineering execution, and production delivery, from first principles to deployment.
Built and led a 16-person cross-functional team across AI research, engineering, product, and infrastructure.
Focused on closing the gap between research models and production systems, operating under real-world constraints: noisy data, hard latency budgets, and operational risk.
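To make "physics-grounded" concrete, here is a loose sketch of one pattern such systems can use: an agent-proposed control action is screened against a simple first-principles model before it ever reaches an operator. Everything in it is hypothetical and illustrative (the FurnaceState fields, the toy energy balance, the temperature limit); Orbital's actual models and architecture are proprietary and far richer.

```python
# Illustrative sketch only: screen an agent-proposed setpoint change
# against a toy first-principles model before surfacing it.
from dataclasses import dataclass

@dataclass
class FurnaceState:          # hypothetical unit state
    feed_rate: float         # t/h
    outlet_temp: float       # degrees C

MAX_OUTLET_TEMP = 370.0      # illustrative metallurgical limit, degrees C

def predict_outlet_temp(state: FurnaceState, new_feed_rate: float) -> float:
    """Toy steady-state balance: at fixed duty, less feed means each
    tonne absorbs more heat, so outlet temperature rises in proportion."""
    return state.outlet_temp * (state.feed_rate / new_feed_rate)

def screen_action(state: FurnaceState, proposed_feed_rate: float) -> dict:
    """Approve the proposal only if the physics model keeps the
    predicted state inside hard operating limits."""
    predicted = predict_outlet_temp(state, proposed_feed_rate)
    return {
        "proposed_feed_rate": proposed_feed_rate,
        "predicted_outlet_temp": round(predicted, 1),
        "approved": predicted <= MAX_OUTLET_TEMP,
    }

state = FurnaceState(feed_rate=120.0, outlet_temp=350.0)
# An upstream agent (e.g. an LLM planner) proposes cutting feed:
print(screen_action(state, proposed_feed_rate=110.0))
```

Here the proposed feed cut would push the predicted outlet temperature past the limit, so it comes back with approved: False; the value of the physics layer is exactly this veto.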
2022 - 2024
Shell
Senior Machine Learning Research Scientist
Research and deployment of computer vision systems in live industrial environments.
Designed and deployed end-to-end computer vision systems for automated inspection in refinery environments, spanning model development through production integration.
Built multi-model inference pipelines (detection, segmentation, OCR) running jointly in real time on resource-constrained edge hardware (NVIDIA Jetson), with strict latency and reliability requirements (a simplified version of the loop is sketched below).
Developed large-scale 3D reconstruction pipelines using NeRFs and Gaussian Splatting, reconstructing complex industrial sites from high-resolution (8K) imagery.
Architected Shell’s first deep-learning R&D platform, including distributed training, evaluation, and deployment pipelines, with active-learning and drift-management loops.
Conducted applied research in generative and diffusion models, embedding strong physical and geometric priors into vision systems for improved robustness in industrial settings.
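For a flavour of what running several models jointly under a hard latency budget looks like, the sketch below pushes each frame through stand-in detection, segmentation, and OCR stages and drops the frame once the budget is spent instead of letting work queue up. The stage functions, timings, and budget are all simulated stand-ins, not Shell code.

```python
# Hypothetical latency-budgeted multi-model loop; stage functions are
# stand-ins for real detection / segmentation / OCR models.
import random
import time

LATENCY_BUDGET_S = 0.050     # e.g. 50 ms end-to-end per frame

def detect(frame):    time.sleep(random.uniform(0.005, 0.020)); return ["valve"]
def segment(frame):   time.sleep(random.uniform(0.005, 0.020)); return "mask"
def read_text(frame): time.sleep(random.uniform(0.005, 0.030)); return "PSV-104"

def process(frame):
    """Run stages in sequence, aborting once the budget is exhausted.
    Dropping late frames keeps the loop real-time instead of letting
    a backlog build up behind a slow stage."""
    start = time.monotonic()
    out = {}
    for name, stage in (("boxes", detect), ("mask", segment), ("tag", read_text)):
        if time.monotonic() - start > LATENCY_BUDGET_S:
            return {"dropped_at": name, **out}
        out[name] = stage(frame)
    out["latency_ms"] = round(1000 * (time.monotonic() - start), 1)
    return out

for frame_id in range(5):
    print(frame_id, process(frame=None))
```

On real hardware the stages would run as batched or pipelined engines rather than strictly in sequence, but the budget-then-drop discipline is the same.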
2021 - 2022
Tractable
Senior Machine Learning Researcher
Multi-modal learning and representation learning at scale.
Built the company’s core multi-modal research framework, enabling scalable experimentation across vision and language models on GPU/TPU infrastructure.
Designed and trained vision–language models combining ViT-based visual encoders with language models for structured damage understanding.
Proposed a self-supervised masked image modelling approach to improve visual representation learning under limited labels, yielding stronger downstream perception performance (the masking-and-reconstruction idea is illustrated below).
Research accepted at CVPR 2023; invited to present the work at the University of Oxford (Torr Vision Group).
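In broad strokes, masked image modelling works like the sketch below: hide most of an image's patches, reconstruct them, and score the loss only on the hidden ones. This is a generic MAE-style illustration in NumPy, not the method from the CVPR paper, and the identity "reconstruction" is a stand-in for a real ViT encoder-decoder.

```python
# Generic masked-image-modelling sketch (NumPy only); the encoder/
# decoder is replaced by an identity stand-in so the masking and
# loss logic stay visible.
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p=8):
    """Split an (H, W) image into flat (N, p*p) patches."""
    h, w = img.shape
    patches = img.reshape(h // p, p, w // p, p).transpose(0, 2, 1, 3)
    return patches.reshape(-1, p * p)

def masked_loss(img, mask_ratio=0.75):
    patches = patchify(img)
    hidden = rng.choice(len(patches), size=int(len(patches) * mask_ratio),
                        replace=False)
    corrupted = patches.copy()
    corrupted[hidden] = 0.0        # zero out the masked patches
    recon = corrupted              # stand-in for encoder/decoder output
    # As in MAE, the loss is computed only on the masked patches.
    return float(np.mean((recon[hidden] - patches[hidden]) ** 2))

img = rng.random((32, 32)).astype(np.float32)
print("MSE on masked patches:", round(masked_loss(img), 4))
```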
2021 - 2022
Hitachi
Machine Learning Engineer
Applied perception and simulation techniques to autonomous and smart-city systems.
Developed real-time perception pipelines for autonomous driving and crowd monitoring, combining object detection, multi-object tracking, and depth estimation.
Built large-scale simulation environments in Unity to generate synthetic training data, addressing data scarcity and edge-case coverage for vision models.
Integrated and optimised multi-model inference pipelines for real-time deployment under latency and hardware constraints.
Explored privacy-preserving vision approaches using generative models to obfuscate identity-sensitive visual content while retaining task-relevant structure (a minimal version of the idea is sketched below).
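A minimal version of that privacy idea, with block pixelation standing in for the generative obfuscation used in the actual work: fine identity detail inside a detected box is destroyed while coarse structure (position, silhouette) survives for tasks like counting and tracking. The box coordinates and block size are arbitrary placeholders.

```python
# Privacy sketch: pixelate an identity-sensitive region in place.
# A generative in-painting model would replace the block averaging
# in a production system.
import numpy as np

def pixelate_region(img, box, block=8):
    """Replace each block x block tile inside `box` with its mean,
    erasing fine detail but keeping coarse scene structure."""
    x0, y0, x1, y1 = box
    region = img[y0:y1, x0:x1]              # a view into img
    h, w = region.shape
    h2, w2 = h - h % block, w - w % block   # trim to whole tiles
    tiles = region[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3), keepdims=True)
    region[:h2, :w2] = np.broadcast_to(means, tiles.shape).reshape(h2, w2)

frame = np.random.default_rng(1).random((64, 64)).astype(np.float32)
pixelate_region(frame, box=(16, 16, 48, 48))   # e.g. a detector's face box
print("variance inside box after pixelation:",
      round(float(frame[16:48, 16:48].var()), 4))
```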
2019 - 2022
Third Eye Intelligence
Founder & CEO
Early warning systems for healthcare using multi-modal AI.
Designed and trained a multi-modal autoregressive model predicting organ failure up to 48 hours before onset from ICU time-series data (the forecast-then-alert pattern is sketched after this role's highlights).
Translated the predictive models into a clinician-facing software product, embedding inference, alerting, and interpretability into ICU workflows.
Validated the product across 3 NHS Trusts (8 hospitals) in live clinical workflows, supporting earlier intervention in critical care.
Raised £125k seed (non-equity); awarded Best Healthcare Startup by the Imperial Institute of Global Health & Innovation.
Concluded the commercial effort and transitioned the technology into Imperial College London’s research programme.
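Grossly simplified, the early-warning pattern looks like this: fit an autoregressive model to a patient time series, roll the forecast forward, and alert when it crosses a clinical threshold. The deployed system was a deep multi-modal network over full ICU data; the NumPy toy below uses a linear AR model on a single made-up mean-arterial-pressure series purely to show the forecast-then-alert shape.

```python
# Toy autoregressive early-warning sketch (NumPy least squares).
import numpy as np

def fit_ar(series, order=3):
    """Least-squares fit of x[t] from the previous `order` values."""
    X = np.column_stack([series[i:len(series) - order + i]
                         for i in range(order)])
    coefs, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
    return coefs

def forecast(series, coefs, steps):
    """Roll the fitted model forward `steps` hours."""
    window = list(series[-len(coefs):])
    out = []
    for _ in range(steps):
        window = window[1:] + [float(np.dot(coefs, window))]
        out.append(window[-1])
    return out

# Hypothetical hourly mean arterial pressure, drifting downwards.
map_mmhg = np.array([82, 81, 80, 79, 77, 76, 74, 73, 71, 70], dtype=float)
horizon = forecast(map_mmhg, fit_ar(map_mmhg), steps=48)   # 48 h ahead
alarm = next((h for h, v in enumerate(horizon, 1) if v < 65.0), None)
print(f"MAP predicted below 65 mmHg at hour {alarm}" if alarm else "no alert")
```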