Written by Don Lariviere
The year just ended proved pivotal for technological advancement, pushing boundaries and redefining possibilities. From the rise of generative AI to breakthroughs in sustainable energy, innovation unfolded at an unprecedented pace. These advancements promise to reshape our world, impacting how we live, work, and interact with our surroundings.
In fact, in the eight minutes it should take you to read this article, we’re guessing parts of it will already need an update! For now, though, join us as Proxet takes a look at some of the highlights of the past year in technology.
2024: An AI Explosion
Generative AI
Last year saw a surge in the popularity of generative AI tools like ChatGPT, Gemini, and Claude, showcasing AI's potential in content creation, coding, and problem-solving. AI is also accelerating scientific research, with applications in health, communication, infrastructure, and sustainability.
TinyML
In an age of big data, there has also been a rise in “Small Data” and TinyML. TinyML refers to running machine learning models on resource-constrained devices, and the focus is shifting toward extracting value from limited data. Instead of relying solely on massive datasets, techniques are being refined to glean insights from smaller, more focused datasets. This is particularly beneficial in fields where training data is scarce, such as specialized manufacturing.
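As a rough illustration of why TinyML fits on such devices, here is a minimal sketch of post-training 8-bit weight quantization, a staple technique for shrinking models to microcontroller scale (the function names are ours, not any particular framework's API):

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus one shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]   # each value fits in one byte
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.12, -0.87, 0.33, 0.05]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Storing weights as 8-bit integers plus a single scale factor cuts memory roughly 4x versus 32-bit floats, at the cost of a small, bounded rounding error (at most half the scale per weight).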
Text-to-video
RunwayML, Meta, and Google are also leading the charge in text-to-video generation. RunwayML's Gen-2 can now generate high-quality video from text prompts, blurring the lines between reality and simulation. Meta has invested heavily in AI video generation with its Make-A-Video model, and Google’s Imagen Video model is also in the race. Imagen can additionally analyze and understand the content of videos, which could potentially be used for tasks like generating summaries, answering questions about a video, or even editing videos based on instructions.
Rise in accessibility
AI is also becoming more accessible with the emergence of no-code AI platforms, which make it easier for businesses without deep technical expertise to develop and deploy AI solutions. These platforms let smaller companies build custom AI models without writing code, automate tasks using pre-built AI modules, and gain insights from data through user-friendly interfaces.
Palantir’s AIP
Proxet partner Palantir’s AIP is one such platform, and it gained significant traction in 2024. AIP is a comprehensive AI solution that lets customers leverage Palantir’s AI and machine learning tools and harness the power of the latest large language models (LLMs) within Foundry and Gotham. The two products target different users (Foundry the private sector and Gotham the government sector), but both excel at gathering and making sense of complex, often sensitive data to understand a situation and improve processes. Customers can deploy LLMs on their own private networks using their own private data, maximizing data security and improving efficiency by reducing data transfer and storage costs.
Brain-computer interface
Researchers at Stanford University and the BrainGate consortium developed a brain-computer interface that decodes neuronal signals into speech. This breakthrough could transform the lives of patients with severe neurological disorders by restoring their ability to communicate. Other firms, including Neuralink and Synchron, are also experimenting with brain-computer interfaces.
Explainable AI (XAI)
Last year saw increasing emphasis on understanding how AI models arrive at their decisions. XAI techniques helped uncover the reasoning behind AI, making it more trustworthy and accountable, especially in critical applications like healthcare and finance.
Several companies incorporated XAI to improve transparency and trust in AI-driven decision-making:
- Google’s What-If Tool is a visualization tool integrated into TensorFlow to help understand model behavior and potential biases.
- Financial services firm HSBC implemented XAI tools to ensure AI-based credit scoring and fraud detection are transparent and meet regulatory standards.
- Zest AI also uses XAI in a platform that helps lenders understand and explain credit decisions made by their AI models, ensuring fairness and transparency while also improving accuracy and reducing bias.
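To make the idea concrete, here is a toy sketch of permutation importance, one of the simplest XAI techniques: shuffle a single feature's values and measure how much the model's accuracy drops. The model and dataset below are entirely illustrative:

```python
import random

def model(row):
    # toy "credit" model: index 0 (income) drives approval; index 1 is noise
    return 1 if row[0] > 50 else 0

rng_data = random.Random(42)
data = [([float(income), rng_data.random()], 1 if income > 50 else 0)
        for income in range(0, 100, 5)]

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feature_idx, seed=0):
    # shuffle one feature's column and measure the resulting accuracy drop
    rng = random.Random(seed)
    shuffled = [x[feature_idx] for x, _ in rows]
    rng.shuffle(shuffled)
    permuted = [(x[:feature_idx] + [v] + x[feature_idx + 1:], y)
                for (x, y), v in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(permuted)
```

Shuffling the noise feature leaves accuracy untouched (importance 0), while shuffling the decisive feature degrades it. Production XAI toolkits, such as scikit-learn's permutation_importance, apply the same logic with repeated trials.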
Data-centric AI
Data-centric AI is a growing approach to artificial intelligence where the primary focus is on improving the quality, consistency, and completeness of the data used to train models, rather than just optimizing the model architecture itself.
The idea is that better data often leads to better models, especially when working with large datasets for tasks like image recognition, natural language processing, and predictive analytics. It improves model accuracy with smaller datasets, reduces bias, and ensures regulatory compliance in industries like finance and healthcare.
Tesla, for example, relies heavily on data-centric AI, continuously collecting driving data from its fleet of vehicles. Its focus is on refining the data used for its self-driving system, improving performance on edge cases and reducing errors in object detection. Johnson & Johnson focuses on high-quality datasets for drug discovery and diagnostics, ensuring datasets are well-structured and labeled to reduce bias in clinical trials and AI-powered diagnostics.
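In practice, much of data-centric AI comes down to unglamorous data hygiene. The sketch below (with made-up data and function names) shows two routine checks, deduplication and conflicting-label detection, applied before any model training:

```python
def clean_dataset(examples):
    """examples: list of (text, label) pairs. Returns (cleaned, conflicts)."""
    seen = {}
    conflicts = set()
    for text, label in examples:
        key = text.strip().lower()          # normalize before comparing
        if key in seen and seen[key] != label:
            conflicts.add(key)              # same input, different labels
        seen.setdefault(key, label)
    # keep one copy of each example; hold conflicting ones out for review
    cleaned = [(k, v) for k, v in seen.items() if k not in conflicts]
    return cleaned, conflicts

raw = [("Great product!", "pos"),
       ("great product!  ", "pos"),   # duplicate after normalization
       ("Terrible support", "neg"),
       ("terrible support", "pos")]   # conflicting label: flag for review
cleaned, conflicts = clean_dataset(raw)
```

Fixes this small rarely change the model at all, yet they routinely move accuracy more than another round of architecture tuning, which is the data-centric bet.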
Focus shift towards LLM inference
There’s also a significant shift in how AI models, particularly LLMs, are developed and used. Traditionally, most computation happens during the "training" phase, where the model learns from massive datasets. Now there's a growing trend toward shifting more computation to "test time" or "inference time," when the model is actually being used to generate responses or make predictions. Instead of just making AI models bigger and training them on more data, researchers are exploring ways to make them "think" more deeply when answering questions, giving the models more time and resources to process information and generate better responses, even if the models themselves aren't dramatically larger.
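One simple test-time-compute technique is self-consistency: sample several answers to the same question and keep the majority vote, trading extra inference for reliability. In the sketch below, a toy random "model" stands in for a real LLM call:

```python
import random
from collections import Counter

def sample_answer(question, rng):
    # stand-in for an LLM call: answers correctly 80% of the time
    return "42" if rng.random() < 0.8 else rng.choice(["41", "43"])

def self_consistency(question, n_samples=101, seed=0):
    # sample many answers, then keep the most common one
    rng = random.Random(seed)
    votes = Counter(sample_answer(question, rng) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

answer = self_consistency("What is 6 * 7?")
```

With 101 samples from an 80%-accurate sampler, the majority vote is correct with overwhelming probability; the real technique applies the same vote to full reasoning chains sampled from an LLM, spending more inference compute to get a better answer.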
Using AI’s Powers For Good
As awareness of the benefits and potential pitfalls of AI grows [link to prior blog], the world is also finding new ways to employ it for social good, like addressing growing global challenges. Data science and AI-powered solutions were increasingly applied to tackle issues like climate change, poverty, and disease outbreaks.
Some examples:
- Drug Discovery: AI played a crucial role in accelerating drug discovery and development at companies such as Pfizer, AstraZeneca, and Novartis, the latter using AI to analyze images of cells and identify potential drug targets. They're also exploring AI for drug repurposing, finding new uses for existing medications.
- Precision agriculture: TinyML sensors were deployed in fields to monitor crop health and optimize irrigation, leading to increased yields and reduced water usage.
- Fairer lending practices: Institutions like Wells Fargo and Capital One focus on responsible AI development, which often includes XAI for explaining credit decisions and mitigating bias.
- Early warning systems for natural disasters: AI models analyzed real-time data from various sources to predict and warn about natural disasters like floods and earthquakes.
Quantum Leaps
Significant progress was made in quantum algorithms and hardware in 2024, bringing us closer to real-world applications across critical fields. Quantum computing is closely related to the AI field but distinct in focus. While both technologies involve advanced computation, quantum computing focuses on leveraging the unique properties of quantum mechanics for complex problem-solving, while AI focuses on algorithms that mimic human intelligence for tasks like pattern recognition, decision-making, and automation.
Advances in error correction, qubit stability, and hybrid quantum-classical approaches have improved the reliability of quantum systems, edging them toward practical use cases. In cybersecurity and cryptography, quantum algorithms are reshaping how we think about data protection, with the potential to break traditional encryption methods while also inspiring the development of quantum-resistant cryptographic standards. In drug discovery, quantum computing is accelerating molecular simulations, potentially revolutionizing pharmaceutical research and personalized medicine.
Sustainability and Resilience
Beyond AI and data (big or small), climate tech innovations aimed at addressing climate change, such as carbon capture, renewable energy, and sustainable agriculture, are gaining traction. The year also highlighted the need for resilient supply chains, with companies looking to diversify sourcing and reduce reliance on single suppliers.
Several companies have successfully employed AI and big data to improve supply chain resiliency and address climate-related challenges. These technologies are helping businesses mitigate disruptions, optimize logistics, and reduce environmental impact.
Some great examples:
- Amazon uses AI for demand forecasting, warehouse automation, and route optimization to ensure supply chain continuity. It also uses big data to predict weather events and geopolitical risks that might impact global logistics.
- Unilever implemented big data analytics to improve supplier visibility and traceability, ensuring ethically sourced raw materials while mitigating climate-related disruption.
- Microsoft (Planetary Computer) employs big data and AI to analyze climate patterns and biodiversity loss. The platform assists companies in making climate-conscious supply chain decisions.
- Shipping giant Maersk uses AI and big data for route optimization to reduce fuel consumption and CO2 emissions in global shipping operations.
Evolving Workforce and Skills
These and other rapidly evolving technologies require a workforce that is constantly learning and developing, leading to increased demand for professionals with expertise in AI, cybersecurity, and data science. By developing the right skills and embracing new technologies, employees can thrive in the changing world of work.
Employees in many tech roles — and even roles beyond those with core tech responsibilities — now need to be familiar with AI and machine learning, data analysis, cloud, cybersecurity, and professional skills like collaboration and problem-solving.
In Closing
2024 was a landmark year for technological progress, especially in AI, where innovations in natural language processing, machine learning, and data-centric strategies made a profound impact across industries like healthcare, finance, and sustainability.
The rapid pace of progress has shown us that staying ahead requires more than just keeping up—it demands expertise, strategic vision, and a commitment to continuous learning. That’s where Proxet thrives. Our 300+ talented Proxetters bring deep expertise in data science, AI, cloud computing, product engineering, and managed services, helping businesses turn complex technologies into practical, scalable solutions.
Thank you for being part of the Proxet community. Together, let’s turn today’s breakthroughs into tomorrow’s successes.