Friday, June 13, 2025

The Autonomous Vehicle System iDAR Mimics Human Eyes

For autonomous driving to be safe, engineers need to design systems and sensors that can detect, interpret and react to hazards on the road.

That is why AEye is using simulation to create an Intelligent Detection and Ranging (iDAR™) platform that mimics how human eyes focus on the road.

iDAR from AEye will mimic how human eyes focus on the road.

The company will be using Ansys Speos to model the optics of its sensor platform and Ansys VRXPERIENCE to test and validate it within a realistic virtual environment.


The iDAR Platform Helps Autonomous Systems Assess the Road

Traditionally, engineers would have to test and validate their sensor systems using physical prototypes, which consume significant time and budget.

However, with the iDAR platform, a sensor system can be tested virtually over millions of miles in a few days.

Thanks to this autonomy on-demand, AEye and its OEM and Tier 1 customers will be able to address use cases systematically and gain more intelligence from the edge.

iDAR will be tested on virtual roads and edge cases.

This systematic approach is important because you can’t put an autonomous system on the road and expect to test it against every potential scenario it could experience. The road is an unpredictable place, so it’s impossible to tell when an edge case will pop up. Setting up these scenarios using physical prototypes would be impractical, expensive and dangerous.

As customers adopt iDAR, Ansys’ pervasive simulation software will be there to fully validate its performance. This will help reduce development time and optimize autonomous implementations.

How to Apply Rocky DEM to Generate More Accurate Structural Analysis

Imagine that you’re designing a bucket conveyor. How would you approach such a project?

Historically, engineers started with a known design, ran hand calculations, made assumptions and performed field tests. These designs likely failed their first trial. So, the engineers iterated and tried again.


Exporting loads from Rocky DEM for static structural simulations

Physical prototyping involves a lot of time, cost and effort. As a result, it’s not conducive to competitive product launch cycles. So, engineers have been adopting high-fidelity simulation tools (like finite element analysis [FEA], computational fluid dynamics [CFD] and the discrete element method [DEM]) to design products.

To design that bucket conveyor, engineers could couple Ansys Mechanical and Rocky DEM to simulate and optimize it virtually. 

FEA and DEM Basics

Engineers use FEA software, like Mechanical, to perform structural simulations for the civil, automotive, aviation and other sectors.

 

Rocky DEM calculates the loads on a bucket excavator from the bulk material it is moving.

Static simulations solve for the equilibrium conditions and deformations of a structure under specified loads. Transient simulations additionally account for inertial and damping effects as the loads and geometry change over time.
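In standard FEA notation (a textbook formulation, not anything specific to Mechanical), these two cases solve:

```latex
[K]\{u\} = \{F\}                                        % static equilibrium
[M]\{\ddot{u}\} + [C]\{\dot{u}\} + [K]\{u\} = \{F(t)\}  % transient equilibrium
```

where [K], [M] and [C] are the stiffness, mass and damping matrices, {u} is the vector of nodal displacements and {F} the applied loads.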

DEM is an integral tool for studying particle dynamics. It handles bulk materials like rocks, soil, powdered chemicals, food chips and pharmaceutical tablets.

DEM accounts for all of the forces acting on each particle within a bulk system. It then provides insight into how these materials would perform within a given component over a range of process conditions.
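To make the method concrete, here is a minimal DEM time step in Python: sum the contact and body forces on each particle, then integrate its motion explicitly. This is a bare-bones sketch of the technique, not Rocky DEM’s actual solver, which uses far more sophisticated contact models and neighbor searches.

```python
import numpy as np

def dem_step(pos, vel, radii, mass, dt, k=1e4, g=9.81):
    """One explicit DEM time step with a linear-spring contact model.

    pos, vel : (n, 3) particle positions and velocities
    radii, mass : (n,) particle radii and masses
    k : contact stiffness (illustrative value)
    """
    n = len(pos)
    force = np.zeros_like(pos)
    force[:, 2] -= mass * g                    # gravity on every particle
    for i in range(n):                         # brute-force contact detection
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            overlap = radii[i] + radii[j] - dist
            if overlap > 0:                    # particles in contact
                normal = d / dist
                f = k * overlap * normal       # repulsive spring force
                force[i] -= f
                force[j] += f
    vel = vel + dt * force / mass[:, None]     # explicit time integration
    pos = pos + dt * vel
    return pos, vel
```

A production DEM code would replace the O(n²) pair loop with a spatial neighbor search and add tangential friction and damping terms.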

Rocky DEM can simulate systems with many particles that have complex shapes and accurate sizes. The tool is used across multiple industrial sectors, including:

  • Mining
  • Heavy machinery
  • Agriculture
  • Chemical
  • Pharmaceutical 

Coupling Ansys Workbench with Rocky DEM

During simulation, Rocky DEM tracks the loads on each node of a geometry mesh. These loads are then exported as a pressure field for further analysis with Mechanical. The FEA software then discretizes the geometry and solves for the equilibrium conditions.

By coupling structural analysis with Rocky DEM, engineers can simulate transient cases while incorporating geometry motion and time-varying loads on boundary elements.
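On the scripting side, the hand-off can be as simple as reading the exported nodal loads and building a conservative load set for the structural run. A sketch under assumed conventions: the CSV layout (node_id, fx, fy, fz) and per-step export files are illustrative, not Rocky DEM’s documented format.

```python
import csv

def read_dem_loads(path):
    """Read per-node DEM loads from a CSV export.
    Assumed columns: node_id, fx, fy, fz."""
    loads = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            loads[int(row["node_id"])] = (
                float(row["fx"]), float(row["fy"]), float(row["fz"]))
    return loads

def envelope_loads(paths):
    """For each node, keep the largest-magnitude load seen across the
    exported time steps: a simple, conservative static load set."""
    best = {}
    for path in paths:
        for node, f in read_dem_loads(path).items():
            mag = (f[0]**2 + f[1]**2 + f[2]**2) ** 0.5
            if node not in best or mag > best[node][0]:
                best[node] = (mag, f)
    return {node: f for node, (mag, f) in best.items()}
```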


Rocky DEM is fully integrated into Ansys Workbench.

In addition, Rocky DEM is fully integrated into Ansys Workbench so it doesn’t require external software to couple it with Mechanical. This also enables engineers to easily apply design exploration tools for virtual parametric studies, optimizations, robustness analyses and response surface generation.
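As a rough illustration of what response surface generation involves (a generic numpy sketch, not Workbench’s design exploration tooling): evaluate a handful of design points, fit a quadratic surrogate, then interrogate the surrogate instead of rerunning the expensive simulation. The design variables and throughput numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical DOE: (tilt angle in deg, rotation speed in rpm) -> throughput
X = np.array([[10, 50], [10, 80], [15, 50], [15, 65],
              [15, 80], [20, 50], [20, 80]], dtype=float)
y = np.array([0.82, 0.88, 0.84, 0.90, 0.91, 0.85, 0.93])

def quad_features(X):
    """Full quadratic basis in two variables: 1, x1, x2, x1^2, x1*x2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

# Least-squares fit of the response surface coefficients
coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def surrogate(x1, x2):
    """Cheap response-surface prediction at an untried design point."""
    return float(quad_features(np.array([[x1, x2]])) @ coef)

print(surrogate(18, 75))   # predict throughput without a new simulation
```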

Rocky DEM can replicate complex motions within its UI, including combined motion and particle-induced free body motion with 6 degrees of freedom.
 

Solving Real-Life Problems

Many clients have coupled Ansys software with Rocky DEM to improve their equipment and processes.

For example, one of the largest producers of iron ore faced poor production efficiency whenever crushed ore jammed the moving screens at the base of its hoppers. This increased maintenance and downtime to clean the equipment.

A design of experiments within Ansys Workbench generates a response surface.

Using Mechanical and Rocky DEM, the company accurately characterized the screen loads under its regular operating process. Rocky DEM captured the broad size and shape distribution of the incoming ore.

This enabled the company to implement effective design changes to the equipment, such as optimizing the tilt angle, rotation speed, distance and profile of the roller disks. After these changes, production increased 11.4%, which saved the company $100 million in just over 3 months.

Digital Image Correlation: A Key Technique for Materials Characterization

Digital image correlation (DIC) is a non-contact, full-field displacement, optical measurement technique. It is often used in the following applications: 
  • Materials characterization
    • Coefficient of thermal expansion (CTE)
    • Glass transition temperature
    • Young’s modulus
    • Poisson’s ratio
  • Sample testing for fatigue and failure
    • In situ monitoring of displacements and strains
  • Displacement or deformation measurements
  • High speed/frequency scenarios
    • Crash testing, vibration  


Diagram of a ball grid array (BGA). Engineers can use digital image correlation (DIC) to assess its thermal expansion or warpage due to thermal, mechanical and thermo-mechanical loads.

DIC is an important tool to capture an electronic component’s response to simulated thermal, thermo-mechanical and mechanical loads. One of the best examples of the value of DIC is its ability to measure the CTE and warpage of ball grid array (BGA) devices.

How Thermo-Mechanical Loads Affect Ball Grid Arrays

A BGA is a complex semiconductor package consisting of multiple elements, including:

  • One or more silicon dies
  • A silica-filled epoxy encapsulant
  • A layered composite of copper and glass fiber-reinforced epoxy
  • Hundreds to thousands of solder balls

This complicated architecture, while necessary to meet performance and cost targets, can result in thermal expansion behaviors that can cause manufacturing defects and failure in the field.

Warpage results from a DIC measurement

When the BGA is soldered to a printed circuit board (PCB), it can warp during reflow. This could result in solder defects, such as head-in-pillow (HiP), that can reduce first-pass yield and increase warranty issues.

While in operation, BGA power dissipation can heat up the package. If the BGA and PCB have different CTEs, the solder balls could experience stress that eventually results in fatigue, crack propagation and failures.

To help detect and prevent these issues, engineers use DIC because it is difficult to estimate warpage and CTE of these complex systems using other methods.

How to Perform a Digital Image Correlation of a Ball Grid Array

To better understand DIC, here is an example study engineers can use to learn how to measure the CTE and warpage of a dummy BGA.

To learn how to perform a DIC, consider a BGA with a speckled pattern and its solder balls removed.

First, the engineers prepare the sample for DIC by removing its solder balls with a soldering wick.

This is done because certain parts (like large lidded components and quad flat no-leads [QFN] overmolds) must be deconstructed and analyzed piece-by-piece.

Once the solder balls are removed, engineers speckle the part. This is done manually and requires a lot of practice. It’s important to make sure the base coat isn’t too thick, as this can throw off the readings. The speckles also need to be an appropriate size for the focal depth of the DIC cameras.

Engineers then put the speckled BGA into the camera chamber, which tracks how far apart the speckles move between different images, as the temperature changes. Engineers can use this information from the whole sample to estimate the CTE.
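Under the hood, that tracking is subset matching: a small patch of the speckle pattern from the reference image is located in each subsequent image by maximizing a correlation score. A minimal integer-pixel sketch of the idea (production DIC codes add sub-pixel interpolation and subset shape functions):

```python
import numpy as np

def track_subset(ref, cur, center, half=15, search=5):
    """Find the integer-pixel displacement of a square subset centered at
    `center` between reference image `ref` and current image `cur`, by
    maximizing zero-normalized cross-correlation (ZNCC). The speckle
    pattern guarantees enough contrast for the normalization."""
    r, c = center
    patch = ref[r-half:r+half+1, c-half:c+half+1].astype(float)
    patch = (patch - patch.mean()) / patch.std()
    best_score, best_disp = -np.inf, (0, 0)
    for dr in range(-search, search + 1):      # scan candidate offsets
        for dc in range(-search, search + 1):
            cand = cur[r+dr-half:r+dr+half+1,
                       c+dc-half:c+dc+half+1].astype(float)
            cand = (cand - cand.mean()) / cand.std()
            score = (patch * cand).mean()      # ZNCC score in [-1, 1]
            if score > best_score:
                best_score, best_disp = score, (dr, dc)
    return best_disp                           # (rows, cols) displacement
```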

In-plane displacement results from a DIC measurement

How to Process the Data from a Digital Image Correlation of a BGA

To assess the results from the DIC, engineers need to plot the BGA’s average strain against temperature.

A chart of the average strain versus temperature. The slope of 10 ppm/degree Celsius (5.6 ppm/degree Fahrenheit) is equal to the CTE.

If all goes well, a linear function can be fit to this data. In this case, the slope will represent the CTE.

In an in-house example, engineers found that the slope varied slightly over the temperature range. However, it can be stated with reasonable accuracy that the CTE is about 10 ppm/degree Celsius (between 20 and 150 degrees Celsius), or about 5.6 ppm/degree Fahrenheit (between 68 and 302 degrees Fahrenheit).
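The data reduction itself is a one-line fit; here is a sketch with invented numbers chosen to reproduce the roughly 10 ppm/degree Celsius slope:

```python
import numpy as np

# Illustrative DIC output: temperature (deg C) vs. average strain
T = np.array([20.0, 50.0, 80.0, 110.0, 150.0])
strain = np.array([0.0, 3.1e-4, 6.0e-4, 9.2e-4, 1.30e-3])

slope, intercept = np.polyfit(T, strain, 1)   # linear fit: strain = CTE*T + b
print(f"CTE ~ {slope * 1e6:.1f} ppm/degC")    # ~10 ppm/degC for this data
```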

With this information, engineers can use Ansys simulation tools to assess whether the particular BGA will fail during operation. To do this, they can plug the CTE and warpage profile into an Ansys Mechanical simulation to see if the warpage experienced at peak reflow temperature (250 degrees Celsius, or 482 degrees Fahrenheit) will result in the solder balls separating from the solder paste.

In an in-house example, the total warpage, which is the absolute difference between the maximum negative and positive warpage, is 60 microns. This result can be evaluated by comparing it to the diagonal length of the BGA. If the ratio of total warpage to diagonal length is below the industry standards of 0.3% or 0.7%, the design should be acceptable.
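That check is simple arithmetic. A sketch using the 60-micron figure above, with an assumed 35 x 35 mm package body (the article does not state the BGA’s size):

```python
import math

warpage_um = 60.0                      # max positive minus max negative warpage
body_mm = 35.0                         # assumed square BGA body size
diagonal_um = math.hypot(body_mm, body_mm) * 1000.0

ratio_pct = 100.0 * warpage_um / diagonal_um
print(f"warpage/diagonal = {ratio_pct:.2f}%")  # ~0.12%, below the 0.3% limit
```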

A more robust way to evaluate warpage is to place a model of the BGA on top of a model of the PCB and run a thermo-mechanical simulation within Mechanical. The total separation between the BGA and the PCB shouldn’t exceed 100 microns, as this is larger than typical solder paste thickness.

A similar approach can be taken to mitigate risks at the customer level. The measured in-plane CTE can be input into Ansys Sherlock and then 1D and 3D simulations can be performed to predict the number of temperature cycles to failure.

DIC in combination with Ansys simulation tools provides engineers with deep insights into component manufacturability and reliability before final design and test. To learn more about DIC, register for the webinar: Ensuring Accurate Material Properties for Simulation with Digital Image Correlation (DIC).

Thursday, June 12, 2025

What is E-mobility & How Do Engineers Design Electric Cars?

What is E-mobility?

E-mobility, or electromobility, refers to the use of electrified vehicles for transportation purposes. It could be a car, bus, truck, or any other vehicle that is fully or partly electric, like a hybrid.

“E-mobility has become a trend that is on the rise,” says Sandeep Sovani, director of industry marketing at Ansys. “In major cities, you can spot an electric or hybrid vehicle on every ride. The trend is fueled by many factors, including clean energy, petrol costs and climate change fears.”

 
Electric vehicle incentives are proof of the e-mobility trend.
 
People, governments, automotive companies and the general community are jumping on board—as evidenced by growing incentives like specialized parking spots, tax breaks and vehicle options.

But what are the engineering challenges preventing electric cars from overtaking transportation systems?

What are the Biggest Barriers to E-mobility?

The biggest challenges to e-mobility are energy storage, charging and cost.

“If we look at a car today, we expect it to have two important features that are often taken for granted. One, that it will drive about 400 miles before refueling. And two, that we can charge it at any gas station in about 5 minutes,” says Sovani.

 
Energy storage and battery charging are some of the biggest e-mobility challenges.

Currently, electric vehicle battery technology can’t accommodate these expectations, especially in cold climates that affect performance.

Additionally, most electric cars are luxury items. Considering the costs and travel limitations, it’s no wonder why they haven’t dominated the market.

However, not all is lost. Engineers need to find solutions to these challenges for electric cars to become dominant.

“The majority of consumers don’t care what fuel goes into a car,” predicts Sovani. “You give it the fuel it needs to meet the range and recharging expectations. Once engineers develop electric solutions to meet these expectations, gasoline and diesel will be phased out.”

Will E-Mobility Gain Popularity?

The electric car has gained public interest a few times throughout history. However, it has yet to dominate the market, despite a modern concept being introduced in the 1990s and Tesla’s announcement of its electric sports car in 2006. Currently, electric cars are a niche option, popular enough that major publications, like U.S. News & World Report, see fit to rank the top models available.

 
The Volkswagen ID. R electric car is breaking speed records around the world.

However, electric cars have entered the racing world in a big way. The Volkswagen ID. R electric car has recently broken records at Pikes Peak and the Nürburgring Nordschleife. Sovani points out that on these tracks the battery-powered vehicle had a few benefits compared to traditional internal combustion engines.

First, its electric powertrain doesn’t need oxygen to operate, so it can maintain top efficiency in the high altitude of Pikes Peak. Second, the battery only needed to run for the eight minutes required to complete the course, so engineers could use a lighter battery by pushing it to its thermal and energy capacity limits.

“Pikes Peak was a big victory for e-mobility. The previous record was broken by nearly a minute, which is an incredible feat given that teams usually struggle to improve these records by a few seconds. It is fascinating,” says Sovani. “This raises the profile of electric vehicles in the public eye. There is no one reason people buy electric, but I think this will be one of the things they think about when they do.”

Marco Oswald, technical account manager for Continental at Ansys, says, “Motorsports are an extreme example of an electric powertrain. Original equipment manufacturers (OEMs) and Tier 1 suppliers are working on mass-market technology to bridge internal combustion engines and electric cars. Systems simulations can help optimize these vehicles for cost, power and efficiency.”

How to Design an Electric Powertrain

Systems simulations are some of the most important tools for designing optimal electric powertrains.

“Recently, we saw a shift from automotive engineers optimizing components to optimizing systems and system integrations,” says Oswald. “Users realize that they have to consider each component as a part of a system and all the multiphysics that entails.”

 
Systems simulations are crucial to designing optimal electric powertrains.

For instance, by modeling its race car’s systems, and how they react to the track, Volkswagen Motorsport was able to optimize its electric car for Pikes Peak without overengineering the weight of the battery.

However, the design criteria of electric vehicles on racetracks are not the same as those on public roads. For instance, consumer-grade batteries will need to last 10 to 15 years, travel hundreds of miles per charge and cover hundreds of thousands of miles per lifecycle. That is a far cry from the eight minutes at Pikes Peak.

Even though the goals have changed, systems simulation can still be applied to the design of commercial cars. Instead of optimizing the systems to an 8-minute racetrack, engineers can optimize the car to the duty cycle it will experience over its lifespan.

To gain insights into the duty cycle of a car, engineers will need to turn to digital twins. Wolfram Schloter, enterprise account manager for Continental at Ansys, elaborates: “Systems simulation is one step away from twin building. Here you can make observations on how a system will behave and compare it to how it is used in the real world.”

Through the digital twin, engineers can gather information on a car’s performance and loads. From there, they can plug that data into systems simulations to gain insights into everything from maintenance cycles to further design improvements.

The Next Step for E-mobility

To successfully design cars for e-mobility, companies need to focus on systems engineering. Otherwise, they will be limited to time-consuming and expensive physical prototypes.

Batteries and brakes are complex systems to begin with. Once you realize they are subsystems of the electric powertrain, they become even more complicated.

Oswald says, “Using systems simulation, we can model what happens to each subsystem under different scenarios, weather and driving conditions. You can then gain insights into how the whole system will behave when they all run together.”

 
An engineer works to optimize a battery system. But to optimize this subsystem in the context of the whole, engineers will need systems simulations or expensive physical prototypes.

Despite the potential of systems simulation, some companies are struggling to keep up with this new design philosophy.

He adds, “Many are still business as usual. If they remain this way, they won’t be competitive. Their time to market will increase while the competition will get faster. That competition will also be able to optimize every system of their product under several conditions. That can’t be done by optimizing on a part-by-part basis. By integrating systems simulation into the design cycle, companies can reduce iteration loops to save time and money.”

Schloter agrees, stating, “When engineers see the effects of using systems simulation for the first time, they are convinced. The main reason companies employ it is to get ahead of their competition.”

What is Crosstalk? Electromagnetic Challenges and Trends

What is Crosstalk?

Crosstalk is the phenomenon where signals from one circuit or transmission line interfere with adjacent circuits or lines. Each signal generates varying electromagnetic (EM) fields. When these signals or circuits are situated close to each other, their EM fields overlap. This interference leads to an unwanted signal coupling causing crosstalk. Crosstalk can occur in electronic systems, such as printed circuit boards (PCBs), integrated circuits (ICs), and communication cables.

 
Engineers can no longer ignore electromagnetic crosstalk. They must understand what it is, how to find it and how to correct it.

The EM signals causing the interference are known as the aggressors, while the EM signal affected by crosstalk is known as the victim. Crosstalk occurs via two mechanisms (see the sketch after this list):

  1. Capacitive crosstalk caused by the electrical field.
  2. Inductive crosstalk caused by the magnetic field.
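To first order, capacitive crosstalk injects a current proportional to the aggressor’s voltage slew rate (i = Cm·dV/dt), while inductive crosstalk induces a voltage proportional to its current slew rate (v = Lm·dI/dt). A back-of-the-envelope sketch, with all component values chosen purely for illustration:

```python
# Back-of-the-envelope coupling estimates (all values illustrative)
C_mutual = 50e-15      # 50 fF mutual capacitance between neighboring lines
L_mutual = 0.5e-9      # 0.5 nH mutual inductance between current loops

dV, dI = 1.0, 0.01     # aggressor swings: 1 V and 10 mA
t_edge = 50e-12        # 50 ps rise time

i_cap = C_mutual * dV / t_edge    # capacitive: i = Cm * dV/dt
v_ind = L_mutual * dI / t_edge    # inductive:  v = Lm * dI/dt

print(f"capacitive coupling current ~ {i_cap * 1e3:.1f} mA")   # ~1 mA
print(f"inductive coupling voltage  ~ {v_ind * 1e3:.0f} mV")   # ~100 mV
```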

Examples of Crosstalk

Printed Circuit Boards (PCB): PCBs involve complex circuit designs where multiple traces run close to one another. When a high-frequency signal passes through a trace, it induces a voltage in an adjacent trace due to capacitive or inductive coupling, causing crosstalk.

Integrated Circuits (ICs): Different components and interconnects are tightly packed in an integrated circuit. When an electromagnetic noise generated in one part of the IC (due to transistor switching) couples with the neighboring components, it causes crosstalk, which degrades its performance.

Communication Cables: In communication cables, such as ethernet cables, multiple twisted pairs transmit data. If the twists aren't tight enough or the cables are poorly shielded, signals from one twisted pair can crosstalk into adjacent pairs, leading to data corruption or reduced signal quality.

High-Speed Data Transmission: In high-speed data transmission, such as in HDMI or USB cables, signals can interfere with each other due to their high frequencies. This interference causes crosstalk and degrades the signal quality.

RF Systems: In radio frequency (RF) systems, crosstalk can occur between adjacent antennas or RF transmission lines. This can result in signal interference, reducing the effectiveness of the system.

Crosstalk in SOCs

Engineers developing system-on-chip (SoC) architectures that ignore crosstalk are taking a big risk. Crosstalk can produce electronic design errors that could lead to market delays and cost overruns.

The Challenges of Identifying EM Crosstalk

To help understand the complexities of EM crosstalk analysis, it is useful to contrast inductive coupling with the more familiar capacitive coupling.

Capacitive coupling is strong in proximity and weaker at a distance. So, engineers can safely ignore capacitive coupling between signal lines that are far apart. In contrast, inductive magnetic coupling cannot be ignored, even between relatively distant signals.

 
It can be hard to determine if electromagnetic crosstalk is the source of an issue.

EM crosstalk is more challenging. First, the symptoms of the problem do not appear in one metric — like timing failure. Instead, crosstalk often manifests as a degradation in some key performance criterion that varies from design to design. Therefore, identifying the issue as crosstalk is the first challenge.

To make matters more complex, crosstalk usually involves unwanted coupling between digital, analog and radio frequency (RF) blocks. Any of these can be the aggressor or the victim.

EM crosstalk needs to be identified, debugged and resolved differently in different designs. Traditional solutions involve architecture or software tricks that prevent the modes of operation that trigger the problem. However, this is becoming financially and technically untenable as designs have grown in complexity and speed.

The Challenges of Modeling EM Crosstalk

To model EM crosstalk accurately, engineers need to analyze and model a staggeringly complex scope of physical structures, including:

  • The nets of interest
  • The surrounding structures that contribute to crosstalk
  • Power and ground routing layers
  • Bulk silicon substrates
  • Package layers
  • Bond/bump pads
  • Routing layers
  • Seal rings
  • Metal fill
  • Decoupling caps
 
Modeling EM crosstalk can be complex because of all the components that need to be included.

Most of these structures have complex physical layouts that require a large mesh to simulate the resistance, capacitance, inductance, coupling capacitance and mutual inductance.

A second factor that increases the size of crosstalk models is that engineers can’t analyze EM crosstalk by limiting the focus to a small bounding box within the design. Analyzing only the immediate neighborhood of a victim signal works well when assessing electrical capacitive coupling. However, magnetic fields can travel along large loops, form outside the immediate neighborhood of a victim signal or encircle the whole layout of the chip.

Additionally, it’s hard to limit the size of a model generated by EM crosstalk tools because it needs to include all the nets that contribute to the crosstalk problem and all the nets and structures that might have an impact on the performance of the circuit.

To be useful downstream in development, the crosstalk model must:

  1. Quickly compute in a simulation program with integrated circuit emphasis (SPICE)
  2. Operate in various nonlinear and noise simulations within a SPICE environment
  3. Exist in a database that crosses the boundaries of blocks or silicon dies

These three requirements are hard to meet given the typical size and complexity of crosstalk models.

The Emerging Need for EM Crosstalk Analysis in SoCs

EM crosstalk is a big concern for engineers because of the demand for electronic systems to increase in bandwidth and decrease in size. This puts high-speed circuitry and high-bandwidth channels in proximity.

 
As electronics become smaller, crosstalk will become a bigger problem. 

Additionally, the continuous increase in internal clock frequencies (5 to 10 GHz) and the increase in data rates (above 10 Gbps) are also fueling the emergence of crosstalk issues.

In short, fast speeds and small electronics create crosstalk; consumer demands are creating SoC trends that make it impossible to ignore parasitic inductance and inductive coupling.

SoC Architecture that Is Prone to Crosstalk

There are many architectural and application design trends that contribute to crosstalk.

For instance, EM crosstalk is frequency dependent. However, engineers cannot analyze EM crosstalk at a single frequency of interest.

As an example, a clock signal with fast rise and fall times contains significant harmonic frequency components. So, a clock running at 10 GHz has a 5th harmonic frequency component running at 50 GHz.
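For an ideal square wave, only odd harmonics appear and their amplitudes fall off as 1/n, which is why the 3rd and 5th harmonics still matter. A quick sketch of that arithmetic:

```python
f_clock = 10e9                     # 10 GHz fundamental
for n in (1, 3, 5):                # odd harmonics of an ideal square wave
    print(f"harmonic {n}: {n * f_clock / 1e9:.0f} GHz, "
          f"relative amplitude {1.0 / n:.2f}")   # amplitude ~ 1/n
```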

 
Multiple ethernet lanes on the same system can become a crosstalk nightmare.

Those who target on-chip clock frequencies of 25 GHz, however, will have to think about how to safely model the 3rd harmonic, which falls into microwave frequencies.

EM crosstalk degrades signal magnitudes by raising the noise level. Hence, the impact of crosstalk is further exacerbated by the decrease in signal voltage levels and the increase in sensitivity to noise driven by lower-power trends in SoC applications.

Ethernet, fiber channel and peripheral component interconnects (PCI) can also be sources of crosstalk. To achieve high data rates, these buses employ multiple serial lanes that operate in parallel. For example, a 100 Gbps ethernet can employ 10 channels that are each running at 10 Gbps. When so many high-speed serial lanes reside in a single system, every lane can be a potential aggressor or a potential victim — a true crosstalk nightmare.

Other architectural trends that increase the likelihood of EM crosstalk include:

  • High-speed analog blocks on one SoC
    • Like phase-locked loops (PLLs) and voltage-controlled oscillators (VCOs)
  • Multiple high-speed clock networks on the same chip
    • Clocks don’t need to operate at high frequencies — victim clocks running at 10 GHz can be affected by aggressor clocks running at 2 GHz.
  • RF or high-speed analog blocks adjacent to high-speed digital blocks
    • Shared ground nets and silicon substrates can’t be tapped as a ground.
    • Silicon substrate remains a key noise-propagation channel between blocks.
  • Seal rings and scribe lines inserted by foundries
  • Low power designs with small signal-to-noise margins
  • Sensitive control/reset signals that can be set by crosstalk glitches
  • Integrated fan-out wafer-level packaging techniques
    • Multiple dies in proximity increase the likelihood of EM crosstalk.

Crosstalk Solutions

SoC integration places high-speed digital circuitry, analog, and RF blocks close together and creates opportunities for crosstalk inside those components and across various blocks.

Most electronic design automation (EDA) tools are geared for a specific design type — such as digital, analog, or RF component design. However, crosstalk is not limited by these boundaries. 

 
Ansys Pharos can help engineers identify crosstalk.

IC design engineers should be able to predict the coupling effects during the signoff phase. Using Ansys RaptorH IC, designers can accurately predict electromagnetic coupling effects and easily capture unknown crosstalk among different blocks in the design hierarchy.

What is Reactive Power and How can it be Used to Create a Reliable Electric Grid?

Remember the blackout of August 2003? It was the largest in North American history — affecting over 50 million people across eight U.S. states and two Canadian provinces.

The North American Electric Reliability Council found that a shortage of reactive power — the power needed to keep electric current flowing — was a significant factor that contributed to the blackout.

Renewable energy sources, such as solar power, provide not only electricity, but can also be used to generate reactive power.


To prevent blackouts, renewable energy systems also need smart inverters to control the energy flux and manage the reactive power of electrical grids. To meet this need, researchers from the University of Pittsburgh have designed smart inverters that regulate the reactive power and voltage of power grids.

What is Reactive Power?

Reactive power is power that is reflected back to the grid — as opposed to active power, which is power that is consumed by the load.

Similar to the pressure that pushes water through a pipe, voltage acts as the pressure that pushes electrical current through power lines. To do this, voltage draws on reactive power.

Without enough reactive power, voltage drops threaten the grid’s stability. Therefore, reactive power doesn’t actively keep our lights and electronics on. Think of it as the power that the AC grid uses to keep the current flowing to those devices.
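In standard AC power terms (textbook relations, not specific to this article), the apparent power S splits into the active power P consumed by the load and the reactive power Q exchanged with the grid, where phi is the phase angle between voltage and current:

```latex
S = V_{\mathrm{rms}} I_{\mathrm{rms}}, \qquad
P = S\cos\varphi, \qquad
Q = S\sin\varphi, \qquad
S^2 = P^2 + Q^2
```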

So, how do we generate more reactive power? Solar photovoltaic (PV) systems might be the answer. Over 55 gigawatts of solar generation capacity is installed in the U.S. — enough to power over 10 million homes.

Connecting PV power to the electrical grid introduces unique challenges — including overvoltage, which requires reactive power absorption. PV power output can also dip due to environmental factors. These voltage swings stress legacy power management equipment, leading to high maintenance, operational and replacement costs.

To mitigate these disturbances, utility companies are requiring that PV systems integrate smart inverters to generate or consume reactive power.

Using Smart Inverters to Regulate Reactive Power

Similar to traditional inverters, smart inverters convert direct current (DC) into alternating current (AC). The key difference is their ability to absorb and output reactive power. This process is also known as reactive power compensation.

Tasking inverters with reactive power compensation creates heat, which can shorten the device’s operational life — or cause it to fail.

 
Integrating PV systems with smart inverters may soon become the new standard.

Designing the inverters typically involves building many prototypes and performing lengthy, expensive experiments. However, with simulation, the University of Pittsburgh’s researchers sought to circumvent this substantial effort.

Simulating Reactive Power Stresses on Smart Inverters

Using multidomain system simulation (now contained in Ansys Twin Builder), the University of Pittsburgh’s researchers developed electrothermal models to evaluate the smart inverter’s circuits and control algorithms.

 
Researchers optimize PV smart inverters, enabling them to manage reactive power stresses.

When the researchers modeled the inverter, the simulated electrical performance matched the expected performance. This comparison showed that the models provide accurate predictions of the inverter’s electrical and thermal performance.

The researchers then conducted characterization studies to reduce the need to physically prototype the inverter’s thermal dynamics — resulting in significant cost savings.

Simulation also enabled the researchers to evaluate different design configurations. Studying these configurations gave the researchers the ability to optimize the inverter’s critical trade-off between reactive power performance and device lifetime.

How to Define Your Own Turbulent Flow Equation for CFD Modeling

Engineers have used computational fluid dynamics (CFD) to model turbulent flows since the early days of the computer. Though performance, speed and accuracy have improved since then, one of the greatest challenges from the past remains: How do you find the right turbulent flow equations to model your system?

 
Comparison between turbulence models: two iterations of GEKO (top, left and right), the standard k-ε (bottom, left) and shear stress transport (SST) (bottom, right).
 
Traditionally, engineers would run experiments to see which turbulence model offered results that were closest to the real-world physics being simulated.
 
However, this isn’t always the most accurate procedure to follow. First, the experimental data would often be based on simplifications of the system being simulated. Second, there is no way to guarantee that the turbulence model will be accurate once the simulation becomes more complex and the flow regime is extended.
 
Furthermore, comparing the turbulence models can be complex as each one needs to be set up differently.
 
Enter Ansys Fluent’s generalized k-omega (GEKO) model. Instead of hoping GEKO fits the test data of your simulation, Ansys users modify it until it does. This way, engineers are not picking the best one-size-fits-all model. They are tailoring the turbulence model to fit their needs.

How Engineers Customize GEKO so the CFD Model Matches the Turbulent Flows

Engineers can use six free coefficients to tweak GEKO to match their experimental flows. They are:
  • CSEP — customizes the flow separation from smooth walls.
  • CNW — customizes the heat transfer for near-wall flows.
  • CMIX — customizes free shear flow mixing.
  • CJET — customizes free jet spreading rates.
  • CCORNER — customizes corner flow separation.
  • CCURV — customizes vortex flows.
  
GEKO simulation of the flow around a triangular cylinder using various CMIX values.
 
The idea is to see how GEKO performs, compare it to the test data, and tune these six coefficients until your version of GEKO matches your data.
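That tuning loop can be automated by treating the coefficients as free parameters and minimizing the mismatch against test data. A hedged sketch: run_geko_case here is a toy stand-in for launching the actual solver, and the starting values and bounds for CSEP and CMIX are illustrative assumptions (check the Fluent documentation for the real ranges).

```python
import numpy as np
from scipy.optimize import minimize

def run_geko_case(csep, cmix, x=np.linspace(0, 1, 20)):
    """Toy stand-in for a solver run. In practice this would launch the
    CFD case with the chosen GEKO coefficients and return probe values."""
    return csep * np.exp(-x) + cmix * x

measured = run_geko_case(1.9, 0.5)        # pretend this is the test data

def mismatch(params):
    """Least-squares error between simulation and experiment."""
    return float(np.mean((run_geko_case(*params) - measured) ** 2))

result = minimize(mismatch, x0=[1.75, 0.35],
                  bounds=[(0.7, 2.5), (0.0, 1.0)], method="L-BFGS-B")
print(result.x)                            # recovers roughly [1.9, 0.5]
```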
