Exploiting Log-Convexity in Aircraft Design Optimization & Global Optimization using Optimal Decision Trees

Friday, October 15, 2021 - 12:00pm to 1:00pm

Speaker Name

Cody Karcher & Berk Ozturk

Affiliation

MIT ACDL

Building and Room number

32-116

Zoom meeting id

91862758072

Join Zoom meeting

https://mit.zoom.us/j/91862758072

Abstract

"Exploiting Log-Convexity in Aircraft Design Optimization" - Cody Karcher
Abstract: Fast and efficient aircraft design optimization has been made possible through the use of Geometric Programming (GP). Despite its many advantages, the underlying mathematics of the GP formulation imposes a harsh limit on the types of models that can be used for discipline analyses such as aerodynamics, structures, and propulsion. This work offers two new approaches to break through this fundamental limitation. The first is a new class of surrogate models that can be fit to high-fidelity data while remaining compatible with the Signomial Programming (SP) extension of Geometric Programming. The second is a new algorithm, derived from Sequential Quadratic Programming (SQP), that ties an existing analysis model directly into the optimization algorithm with no need for a surrogate model. This talk will motivate the underlying mathematical theory of both new methods and show some initial applications to aircraft design optimization.
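For readers unfamiliar with the form, the sketch below (not taken from the talk) shows the kind of log-convex problem GP solvers handle: a toy cruise-speed trade-off in which parasite and induced drag are both posynomials, written in cvxpy's geometric-programming mode. The constants and the stall-type limit are illustrative assumptions, not the speakers' model.

```python
import cvxpy as cp

# Assumed, illustrative constants (not from the talk):
# air density, wing area, weight, parasite drag coeff., induced drag factor
rho, S, W, CD0, k = 1.23, 16.0, 5000.0, 0.02, 0.05

V = cp.Variable(pos=True, name="V")     # cruise speed; GP variables must be positive

q = 0.5 * rho * V**2                    # dynamic pressure: a monomial in V
CL = W / (q * S)                        # lift coefficient required for L = W: also a monomial
drag = q * S * CD0 + k * q * S * CL**2  # parasite + induced drag: a posynomial

prob = cp.Problem(cp.Minimize(drag), [CL <= 1.5])  # stall-type limit (monomial <= constant)
prob.solve(gp=True)                     # log transform makes the problem convex
print(f"V* = {V.value:.1f} m/s, D* = {prob.value:.1f} N")
```

Because every expression above is a monomial or posynomial, the log transform renders the problem convex and the solver can certify a global optimum; the modeling restriction the abstract refers to is precisely the requirement that every discipline model fit this form.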
 
"Global Optimization using Optimal Decision Trees". Berk Ozturk
Abstract: Optimization is key to the conceptual design of aerospace systems. One major challenge in conceptual design is optimizing over constraints, functions, or data that do not fit into standard mathematical forms. In increasing order of difficulty, design constraints may be mathematically inefficient (e.g., non-convex constraints), inexplicit (e.g., solutions of PDEs), or unqueriable (e.g., results of non-repeatable experiments). The current state-of-the-art methods in aerospace design optimization are primarily gradient-based and heuristic methods. While these methods have seen their capabilities grow, they are limited to local and low-dimensional optimization, respectively, and struggle with a subset of the aforementioned constraint classes.
Leveraging the dramatic speed improvements in mixed-integer optimization (MIO) and recent research in machine learning (ML), I propose a new method to learn MIO-compatible approximations of difficult optimization problems using optimal decision trees. This approach requires only a bounded variable domain and data over the unqueriable constraints, and can address all three difficult constraint classes. The MIO approximation is solved efficiently to find a near-optimal, near-feasible solution to the original optimization problem. The solution is then converged to a locally feasible and optimal solution using a series of projected gradient descent iterations. The method is tested on a number of numerical benchmarks from the literature as well as several real-world design problems, demonstrating its promise in finding global optima efficiently.
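As a rough illustration of the tree-based idea, the sketch below uses an ordinary CART tree from scikit-learn rather than the Optimal Decision Trees the abstract refers to, and a made-up black-box constraint g; none of this is the speaker's code. It samples a bounded domain, learns a feasibility classifier, and reads each feasible leaf off as an axis-aligned box. An MIO model would then select one of these boxes with binary indicator variables and big-M bounds, which is what makes the learned constraint MIO-compatible.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical "unqueriable" constraint, used here only to generate data:
# a point x is feasible iff g(x) <= 0 on the box [-2, 2]^2.
def g(x):
    return np.sin(4 * x[:, 0]) + x[:, 1] ** 2 - 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(2000, 2))   # samples over the bounded domain
y = (g(X) <= 0).astype(int)                  # 1 = feasible sample

# Step 1: learn the feasible set with a shallow decision tree.
tree = DecisionTreeClassifier(max_depth=4).fit(X, y)

# Step 2: each leaf of an axis-aligned tree is a hyper-rectangle; collect the
# boxes the tree labels feasible. An MIO model would introduce one binary z_l
# per feasible leaf, enforce sum_l z_l = 1, and activate that leaf's bounds
# with big-M constraints, yielding an MIO-representable approximation of g.
t = tree.tree_
def leaf_boxes(node=0, lo=(-2.0, -2.0), hi=(2.0, 2.0)):
    if t.children_left[node] == -1:          # leaf node
        if np.argmax(t.value[node]) == 1:    # predicted feasible
            yield (lo, hi)
        return
    f, thr = t.feature[node], t.threshold[node]
    hi_left = list(hi); hi_left[f] = min(hi[f], thr)   # left branch: x[f] <= thr
    lo_right = list(lo); lo_right[f] = max(lo[f], thr) # right branch: x[f] > thr
    yield from leaf_boxes(t.children_left[node], lo, tuple(hi_left))
    yield from leaf_boxes(t.children_right[node], tuple(lo_right), hi)

for lo, hi in leaf_boxes():
    print("feasible box:", np.round(lo, 2), "to", np.round(hi, 2))
```

The final projected-gradient step described in the abstract would then start from the MIO solution inside one of these boxes and restore feasibility with respect to the true constraint; that repair step is not shown here.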