
Jianlin Ye
Researcher & Software Engineer
Building innovative solutions at the intersection of AI and software engineering.
About Me
I am a researcher and software engineer with an interdisciplinary background in electrical engineering, specializing in the application of artificial intelligence and computer vision methodologies.
My academic and professional work focuses on developing novel computational approaches to solve complex problems, particularly in UAV technology and autonomous systems.
Through my academic pursuits, I have developed expertise in machine learning frameworks and data analytics, applying these skills to design and optimize algorithms that advance the state of the art.
I am committed to contributing to the academic community through rigorous research methodologies while also bridging the gap between theoretical advancements and their real-world implementations.
Research Interests
- Artificial Intelligence
- Computer Vision
- Machine Learning & Deep Learning
- Natural Language Processing
- Data Analytics & Optimization
Technical Expertise
- Python (PyTorch, TensorFlow)
- Java
- C++
- MLOps & Cloud Computing
- Software Architecture & Development
Latest News
Paper Accepted at ICUAS 2025
Our paper entitled "VLM-RRT: Vision Language Model Guided RRT Search for Autonomous UAV Navigation" has been accepted for presentation at the 2025 International Conference on Unmanned Aircraft Systems (ICUAS 2025), to be held in Charlotte, North Carolina, USA.
Publications
CNN-based Real-time Forest Fire Detection System for Low Power Embedded Devices
Jianlin Ye, Stelios Ioannou, Panagiota Nikolaou, Marios Raspopoulos
This paper proposes a system architecture that uses deep-learning image-processing techniques to automatically identify forest fires in real time, using neural-network models suited to small-UAV applications. Considering the strict power and payload constraints of small UAVs, the proposed model runs on a compact, lightweight Raspberry Pi 4B (RPi4B), and its performance is comparable to the state of the art in accuracy and real-time response while achieving significant reductions in CPU usage and power consumption. The proposed YOLOv5 optimization approach includes: 1) replacing the backbone network with ShuffleNetV2, 2) pruning the head and neck networks following the backbone baseline, 3) sparse training to enable the model-pruning method, 4) fine-tuning the pruned network to recover detection accuracy, and 5) hardware acceleration by overclocking the RPi4B to improve the inference speed of the algorithm. Experimental results show that, compared to state-of-the-art algorithms running on the RPi single-board computer, the proposed system achieves 50% higher inference speed (9 FPS), reductions in CPU usage and temperature of 35% and 25% respectively, and 10% lower power consumption, while accuracy (92.5%) is compromised by only 2%. Finally, it is worth noting that the accuracy of the proposed algorithm is not affected by deviations in the bird's-eye view angle.
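The sparse-training and channel-pruning steps in the abstract can be sketched as follows. This is a minimal PyTorch illustration of BatchNorm-scale ("network slimming") pruning under assumed hyperparameters (`lam`, `keep_ratio`), not the paper's implementation:

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Sparse training: an L1 penalty on BatchNorm scale factors (gamma),
    added to the detection loss, pushes unimportant channels toward zero."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return lam * penalty

def prune_mask(bn: nn.BatchNorm2d, keep_ratio: float = 0.5) -> torch.Tensor:
    """Pruning: after sparse training, keep the channels with the largest
    |gamma|; the boolean mask marks the channels to retain."""
    gammas = bn.weight.detach().abs()
    k = max(1, int(keep_ratio * gammas.numel()))
    threshold = gammas.topk(k).values.min()
    return gammas >= threshold
```

During training the penalty is simply added to the loss; after pruning, the retained channels are fine-tuned to recover accuracy, as in step 4 above.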
Honors & Awards
Third Place: Team FinBot - Quadcode HackAIthon
Recognized for excellence in AI solution development during the competitive hackathon.
MSc AI Scholarship
Awarded scholarship for academic excellence in artificial intelligence studies.
First Class Honours Degree
Graduated with First Class Honours (Overall APM: 87.90%).
Best BEng(Hons) Electrical and Electronic Student
Recognized as the top-performing student in the Electrical and Electronic Engineering program.
AWS Certified Cloud Practitioner
Achieved industry certification validating cloud expertise and technical knowledge.
Education
MSc Artificial Intelligence
Thesis
VLM-RRT: Vision Language Model Guided RRT Search for Autonomous UAV Navigation
Presented at the ICUAS 2025 Conference
Core Modules
Skills
BEng (Hons) in Electrical and Electronic Engineering
Thesis
CNN-based Real-time Forest Fire Detection System for Low Power Embedded Devices
Presented at the MED 2023 Conference
Core Modules
Skills
Experience
Research Engineer
Responsibilities
- Conducting cutting-edge research and development of computer vision technologies and applications
- Implementing and testing new solutions on UAVs (drones)
- Optimizing existing solutions for R&D applications
- Collecting and annotating data, and preparing datasets for release
- Working on projects involving LLMs, including model fine-tuning and deployment
- Preparing manuals and guidelines
Technologies
Machine Learning Engineer
Responsibilities
- Co-developed a generative AI sanctions screening system automating AML decisions using RAG, fine-tuned SLMs, and Kubernetes, reducing manual effort by 40%
- Engineered an AI agent integrating Elasticsearch RAG and SLMs to boost alert accuracy by 30% while ensuring GDPR/EU AI Act compliance
- Built a compliance dashboard enabling real-time validation of AI decisions, cutting manual reviews by 50%
Technologies
Python Developer
Responsibilities
- Led the automation of transcription for human-operator calls using the MS Azure Batch Transcription API, transforming its JSON output into clean XLSX documents
- Optimized the pipeline with an efficient Python script, significantly speeding up transcription and file processing, particularly for large volumes of files
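The JSON-to-spreadsheet step can be sketched as below. The field names (`recognizedPhrases`, `nBest`, `display`) follow Azure's Batch Transcription output format and may differ by API version; the final XLSX export (e.g. via openpyxl) is omitted for brevity:

```python
import json
from pathlib import Path

def phrases_to_rows(transcript: dict) -> list[dict]:
    """Flatten one Azure Batch Transcription result into tabular rows
    (speaker, offset, best-hypothesis display text)."""
    rows = []
    for phrase in transcript.get("recognizedPhrases", []):
        best = phrase.get("nBest", [{}])[0]
        rows.append({
            "speaker": phrase.get("speaker", ""),
            "offset": phrase.get("offset", ""),
            "text": best.get("display", ""),
        })
    return rows

def load_rows(path: str) -> list[dict]:
    """Read one transcription JSON file and flatten it for export."""
    return phrases_to_rows(json.loads(Path(path).read_text(encoding="utf-8")))
```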
Technologies
Web Designer & Developer
Responsibilities
- Used JavaScript and HTML to build interactive learning activities for children aged 3–5 during the COVID-19 pandemic
Technologies
Projects
Explore my latest projects and research initiatives in AI and software development.

ResearchFlow
ResearchFlow is a powerful multi-agent chat interface that transforms your research process. Manage multiple AI agents in one dynamic conversation to plan your research and conduct literature reviews more efficiently.
VLM-RRT: Vision Language Model for Path Planning
VLM-RRT is a hybrid approach that integrates the pattern recognition capabilities of Vision Language Models (VLMs) with the path-planning strengths of Rapidly-exploring Random Trees (RRT). By leveraging VLMs to provide initial directional guidance based on environmental snapshots, our method biases sampling toward regions more likely to contain feasible paths, significantly improving sampling efficiency and path quality for autonomous UAVs.
Path planning is a fundamental capability of autonomous Unmanned Aerial Vehicles (UAVs), enabling them to efficiently navigate toward a target region or explore complex environments while avoiding obstacles. Traditional path-planning methods, such as Rapidly-exploring Random Trees (RRT), have proven effective but often encounter significant challenges. These include high search space complexity, suboptimal path quality, and slow convergence, issues that are particularly problematic in high-stakes applications like disaster response, where rapid and efficient planning is critical. To address these limitations and enhance path-planning efficiency, we propose Vision Language Model RRT (VLM-RRT), a hybrid approach that integrates the pattern recognition capabilities of Vision Language Models (VLMs) with the path-planning strengths of RRT. By leveraging VLMs to provide initial directional guidance based on environmental snapshots, our method biases sampling toward regions more likely to contain feasible paths, significantly improving sampling efficiency and path quality. Extensive quantitative and qualitative experiments with various state-of-the-art VLMs demonstrate the effectiveness of this proposed approach.
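The sampling bias at the heart of this approach can be illustrated with a short sketch (an illustrative simplification with assumed parameters, not the paper's implementation): with probability `bias_prob`, a sample is drawn inside a cone around the heading suggested by the VLM; otherwise the sampler falls back to uniform sampling over the workspace, as in standard RRT.

```python
import math
import random

def biased_sample(bounds, root, heading_deg, bias_prob=0.7, cone_deg=60.0):
    """Draw one RRT sample. With probability bias_prob, sample inside a
    cone of width cone_deg centred on the VLM-suggested heading from the
    tree root; otherwise sample uniformly over the workspace."""
    (xmin, xmax), (ymin, ymax) = bounds
    if random.random() < bias_prob:
        theta = math.radians(heading_deg + random.uniform(-cone_deg / 2, cone_deg / 2))
        r = random.uniform(0.0, math.hypot(xmax - xmin, ymax - ymin))
        x = root[0] + r * math.cos(theta)
        y = root[1] + r * math.sin(theta)
        # Clamp biased samples back into the workspace.
        return (min(max(x, xmin), xmax), min(max(y, ymin), ymax))
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```

Tuning `bias_prob` trades exploitation of the VLM's directional guidance against the uniform exploration that preserves RRT's probabilistic completeness.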