Matthew Kelly
Turning a small rosewood top on a wood lathe
In this video I show one technique for making a small rosewood top on the wood lathe.
Views: 485

Videos

Inkscape Tutorial: Simple Photo Collage
8K views · 7 years ago
In this video I demonstrate how to make a simple photo collage using Inkscape. I'll do my best to answer all questions left in the comments section! Topics Covered: - importing photos - rectangle crop (with / without rounded corners) - circle / ellipse crop - polygon crop - basic image transforms - basic keyboard shortcuts - alignment - object depth - export as png Download inkscape (Windows / ...
Ranger Simulation, Small Push
634 views · 8 years ago
A simulation of the Cornell Ranger walking robot, walking on flat ground. The robot receives a small (unexpected) push a few seconds into the simulation. The simulator can be downloaded at: github.com/MatthewPeterKelly/RangerSimulation
Ranger Simulation, Flat Ground, Starting from Rest
239 views · 8 years ago
A simulation of Ranger walking on flat ground, starting from rest. The controller was designed using robust optimization. This simulation is conducted under ideal conditions.
Ranger Walking in Simulation, Perfect Conditions, Slow motion
263 views · 8 years ago
Ranger is walking here using closed-loop control, with the controller designed using robust optimization. The simulator can be downloaded at: github.com/MatthewPeterKelly/RangerSimulation
Introduction to Trajectory Optimization
88K views · 8 years ago
This video is an introduction to trajectory optimization, with a special focus on direct collocation methods. The slides are from a presentation that I gave at Cornell, linked here: www.matthewpeterkelly.com/tutorials/trajectoryOptimization/cartPoleCollocation.svg The journal paper version of this talk, to be published by SIAM Review in December 2017: www.matthewpeterkelly.com/research/MatthewK...
Which way does the spool roll?
2.8K views · 8 years ago
This is a basic dynamic demo: when you pull on a rope connected to a spool of wire, which way does the spool roll? The answer is somewhat unintuitive. Props: twine, Lego wheels, and a flat table.
Pedal-operated F-valve for trombone.
828 views · 8 years ago
I'm working on a side project to make a mechanism that operates the F-valve on a trombone using a foot pedal. It is designed for use by trombonists who lack dexterity in their left hand. The key idea is to use a Bowden cable to connect a foot pedal to the valve. This is a first prototype.
Driven Damped Pendulum - Simulation
1.4K views · 8 years ago
Each dot is a point in the phase space (angle vs. rate) of the driven damped pendulum system, where x = angle, dx = rate, and ddx = angular acceleration: ddx = cos(t) - 0.1*dx - sin(x). The simulation runs for 15 periods of the forcing function, and the color of each dot gives its energy at time = 0.
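The equation above can be simulated directly; here is a minimal Python sketch (the grid of initial conditions, step count, and RK4 integrator are my own choices, not necessarily what the video used):

```python
import numpy as np

def accel(t, x, dx):
    # ddx = cos(t) - 0.1*dx - sin(x), the equation from the description
    return np.cos(t) - 0.1 * dx - np.sin(x)

def rk4_step(t, x, dx, h):
    # One classical fourth-order Runge-Kutta step for the second-order system
    k1x, k1v = dx, accel(t, x, dx)
    k2x, k2v = dx + 0.5 * h * k1v, accel(t + 0.5 * h, x + 0.5 * h * k1x, dx + 0.5 * h * k1v)
    k3x, k3v = dx + 0.5 * h * k2v, accel(t + 0.5 * h, x + 0.5 * h * k2x, dx + 0.5 * h * k2v)
    k4x, k4v = dx + h * k3v, accel(t + h, x + h * k3x, dx + h * k3v)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            dx + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

# A grid of initial "dots" in phase space, integrated for 15 forcing periods
x, dx = np.meshgrid(np.linspace(-np.pi, np.pi, 20), np.linspace(-2.0, 2.0, 20))
t, t_final, n_steps = 0.0, 15 * 2 * np.pi, 3000
h = t_final / n_steps
for _ in range(n_steps):
    x, dx = rk4_step(t, x, dx, h)
    t += h
# x, dx now hold the final phase-space position of every dot
```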
Animation of a simple cart-pole swing up
5K views · 8 years ago
Simple animation of a cart-pole swing-up. This is a play-back of a minimum force-squared optimal trajectory. Source code for trajectory optimization of the cart-pole: github.com/MatthewPeterKelly/OptimTraj/tree/master/demo/cartPole Trajectory optimization tutorial: ua-cam.com/video/wlkRYMVUZTs/v-deo.html Animation in Matlab: github.com/MatthewPeterKelly/IntroMatlabDynamics/tree/master/Animation...
Rheometer Control Demo
168 views · 8 years ago
MIT Non-Newtonian Fluid Lab, Filament Stretching Extensional Rheometer. I designed the high-speed feedback controller that moves the two stages to produce a constant strain rate in the fluid specimen being tested.
Cornell Ranger, Walking down hallway, MDP controller
700 views · 8 years ago
Cornell Ranger, walking down the hallway outside of the lab. The high-level controller was designed by modelling the walking gait as a Markov Decision Process, and then solving for the optimal control policy using value iteration. The high-level controller runs on every other step, adjusting the parameters that are used by the lower-level controllers. The angle sensor in the steering motor is br...
RangerSimulation_MDP_Wavy_Ground
115 views · 8 years ago
Simulation of Cornell Ranger walking over "wavy" ground to demonstrate robustness to disturbances. The ground has a peak-to-trough height of 3 cm, with peaks spaced 4 meters apart. I designed the high-level controller by modelling the walking behavior as a Markov Decision Process, and then solving for the optimal policy using value iteration.
Ranger Walking - simple test
724 views · 8 years ago
A short test of Ranger walking in the hallway, using a simple controller that I designed.
Ranger Simulation - simple optimal controller
353 views · 8 years ago
Tour of the chicken coop
225 views · 8 years ago
Building the Chicken Coop
176 views · 8 years ago
Cornell Ranger - Simulated Walk - Ideal Model
280 views · 8 years ago
Slow-motion wood lathe
393 views · 9 years ago
Cart-Pole Dynamics -- Part 2 of 2
6K views · 9 years ago
Cart-Pole Dynamics -- Part 1 of 2
10K views · 9 years ago
Non-linear robust control for inverted-pendulum 2D walking
2K views · 9 years ago
Mass Matrix of contact point - animation
195 views · 9 years ago
Ranger Walk - dumb controller
323 views · 10 years ago
Ranger Walk - dumb controller - annotated
243 views · 10 years ago
HeuristicControllerWavyGround
303 views · 10 years ago
SimpleBiped_WalkJumpWalk
234 views · 10 years ago
BallBounceDemo
162 views · 10 years ago
TireBounceDemo
111 views · 10 years ago
ClothStiffnessDemo
97 views · 10 years ago

COMMENTS

  • @fifikeo · 3 months ago

    Bro dropped a banger and left😢

    • @MatthewKelly2 · 3 months ago

      Sorry! I'm hoping to make more videos someday... but yeah, it was much easier to find time for this sort of thing in grad school.

  • @furiousspirit · 8 months ago

    Just wow. Thank you so much!

  • @BernhardWullt · 10 months ago

    Great presentation! Thanks!

  • @vsaihitesh2237 · 10 months ago

This is such an elegant visual explanation of an abstract problem. I finally feel at peace having understood this topic, thank you very much!

  • @jcamargo2005 · 11 months ago

Thank you for this excellent presentation!

  • @crusader0775 · 1 year ago

I am glad I found this. I am a robotics newbie trying to learn trajectory optimization and optimal control. Great presentation, hats off. I appreciate your efforts, thank you.

  • @izharulhaq2436 · 1 year ago

Amazing introduction to the field of trajectory optimization. This lecture should be made compulsory for all students in the field. Thanks.

  • @razmo9396 · 1 year ago

POV: you got lost during your T.I.P.E.

  • @Nomolosos89 · 1 year ago

    @MatthewKelly2, thank you so much for your time, excellent explanation. I also liked the presentation, very clean and well organized. I would very much like to know if there is a template for it.

    • @MatthewKelly2 · 1 year ago

      Thanks! There isn't a template, but you can download the source code (SVG + HTML) for the presentation using the link in the description. It is written in Inkscape, with animations using Sozi.

  • @pythonking_stem1527 · 1 year ago

    14:17 Can you give some reference on indirect methods?

    • @MatthewKelly2 · 1 year ago

Good question! I actually don't know any off the top of my head. I would start by looking into the references cited by the references at the end of this video. Let me know (or reply here) if you find a good one!

    • @pythonking_stem1527 · 1 year ago

@@MatthewKelly2 Sure, thank you for the response.

  • @pythonking_stem1527 · 1 year ago

    3:58 @MatthewKelly2, Can you tell me where we use this closed loop optimal control?

    • @MatthewKelly2 · 1 year ago

      The "closed loop optimal control" problem is traditionally set up by solving the "Hamilton Jacobi Bellman" HJB equation numerically. This is only practical for low-dimensional problems due to the "curse of dimensionality". More recently, there has been a ton of progress made in approximate solutions to this problem using reinforcement learning. There are lots of good resources on that, and I'm not a real expert on RL. The most impressive work that I've seen there is by the Robot System Lab at ETH Zurich, where they used it to control a quadruped robot walking over rough terrain. More details here: rsl.ethz.ch/research/researchtopics/rl-robotics.html

    • @pythonking_stem1527 · 1 year ago

      @@MatthewKelly2 Thanks for the link...I find your way of teaching very interesting ...

  • @pythonking_stem1527 · 1 year ago

    43:00 is it dirk hall?

  • @hiaaa9394 · 1 year ago

Thank you Kelly. I still have a question: if the decision variables in the trajectory optimization problem contain variables other than control variables, state variables, and time, can OptimTraj solve the problem? Or is there another way to solve it, such as the dual multipliers introduced to better solve some nonconvex optimizations?

    • @MatthewKelly2 · 1 year ago

      In general, you can add just about anything you want as a decision variable in a trajectory optimization problem. One common example would be slack variables for a constraint, or a parameter in the dynamics model. Specifically in my OptimTraj Matlab package there is some limitation on the types of decision variables, which I did mostly to keep the user interface simple enough to use in an educational context. The GPOPS-II optimization package (also in Matlab) has a bit more flexibility, for example, allowing model parameters to be decision variables. I'm sure other packages have more flexibility as well. Or you could code up your own transcription, and then the sky is the limit. The key thing to keep in mind is that you will need to be careful that you still have a "well behaved" problem. For example, you'll still need consistent / continuous gradients of those new decision variables.

  • @jasonsejkora4578 · 1 year ago

I am looking at making a bunch of these for a barn in my backyard. I have a ladder just like yours. I would love to see a close-up of the joinery and the process of building the sled for the mill!

  • @ethanepp849 · 2 years ago

You mentioned you had some references for working on multi-stage problems, and I was wondering if you would be able to link them? If it helps, the specific problem I am working on (using the OptimTraj library you made) is a sort of toy problem: a point mass navigating a 2D obstacle course, with obstacles it can't cross as path constraints and a series of points that it has to reach along its path. I am very new to this sort of thing so any comments or references are greatly appreciated. Thanks in advance, and great video!

    • @MatthewKelly2 · 2 years ago

      That sounds like an interesting problem to work on and learn from. I recommend starting simple and then adding constraints until you get to solving the full problem. For multi-phase references, look up the GPOPS-II user manual and associated papers by Anil Rao et al. There are definitely others out there, I just don't know them off the top of my head. The big picture concept with phase constraints is that you are connecting multiple "mini trajectory optimization problems" into one larger problem using "linkage constraints". The linkage constraints connect the boundary state on one phase to the boundary state of another phase. Interestingly, there is nothing that requires them to be sequential -- in general they can form a graph. The main challenge comes in setting up a clean user interface for specifying them, which is why they are not included in OptimTraj -- I wanted to keep it as simple as possible to use. That being said, they are not too challenging to hack into that software for a specific case. One side note -- my OptimTraj package is "optimized" for being easy to read and use as a learning tool, while still being fast enough to use for toy problems. The problem you are setting up is complicated enough that you are probably going to start running into some of the limitations of both OptimTraj and Matlab itself. If you're tied to Matlab, then I suggest switching to GPOPS-II, which is heavily optimized for performance on larger multi-phase problems. If you're not tied to Matlab, then I would suggest hunting around for a C++ implementation (perhaps with python bindings, if they exist) that would be faster.

  • @keithshockley3443 · 2 years ago

What model of horn are you using?

    • @MatthewKelly2 · 2 years ago

      It is a Benge 165F

    • @keithshockley3443 · 2 years ago

@@MatthewKelly2 Would you recommend this brand, or something of higher quality and durability? I have a moz tenor that doesn't sound that great and the triggers keep getting stuck.

    • @MatthewKelly2 · 2 years ago

@@keithshockley3443 - I like this horn, but it is definitely "intermediate level". The trigger is reliable, but makes a bit of noise. The tolerances on the tuning slides are not great and they tend to get jammed if you're not careful. That being said, it sounds nice and is pretty indestructible. I got it in middle school and used it through high school. I got a much nicer Jupiter horn when I graduated. It was better in many ways, but I found the Thayer valve continually got stuck and was hard to maintain. I don't play much anymore, so I eventually sold the Jupiter horn and keep the Benge around for playing occasionally. I would recommend the Benge horn for a beginner who is looking for an affordable upgrade from a student model, but not for someone who is looking to play at a higher level long term. For that I would recommend getting a slightly higher quality horn, but with the same type of valve as the Benge has. With better tolerances and the right padding on the stops, the valve should be quiet and easy to maintain. The key features I would look for (apart from sounding good to you) would be nice tolerances on both the main and tuning slides. Good luck!

  • @keithshockley3443 · 2 years ago

    Nice pedal contraption! What songs are you playing??

    • @MatthewKelly2 · 2 years ago

Mostly just different warm-up routines. The one "real" song near the end is an excerpt from "Simple Gifts". Glad that you like it!

  • @yosinhu5952 · 2 years ago

Thank you, great work! Hope you have a good day!

  • @yosinhu5952 · 2 years ago

Are there any tutorials on the pseudospectral method available in China? I want to implement it in C++. If you're a fellow Chinese speaker, please contact me; I'm willing to pay.

  • @yuelinzhurobotics · 2 years ago

Thank you, it's still very helpful right now. I also have a question: why do we use the integral of torque (or input) as the objective function? What is the basis for doing that? Is it something like (let F = ma, F = 1, m = 1, so the work is W = FS = a * 1/2*a*t^2)? Thank you, Kelly, if you have time to answer my question.

    • @MatthewKelly2 · 2 years ago

There are some more-technical ways to show that the "integral of control effort squared" is a good objective function for second-order dynamical systems, but here is an intuitive explanation. The quadratic cost means that it is "expensive" to use a large control effort. Let's imagine that the controller wants to deliver a fixed impulse. If the cost were linear, then there would be no reason to prefer a 0.1s * 10N impulse vs a 1s * 1N impulse. With a quadratic cost the 1s * 1N impulse is "less expensive". Overall, this means that the "control-squared" objective tends to produce smooth solutions. This is important because the discrete approximation of the trajectory relies on the assumption that a polynomial spline can approximate the solution... which is only true if the solution is "smooth". It also happens that quadratic cost functions have really nice gradients, which helps the optimization numerically.
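The impulse comparison above can be checked in a few lines of Python (`effort_cost` is a hypothetical helper; the force profiles are the ones from the comment):

```python
def effort_cost(force, duration, power):
    # Cost of a constant-force impulse: integral of |u|^power dt = |F|^power * T
    return abs(force) ** power * duration

# Two constant-force profiles delivering the same impulse (1 N*s):
linear_hard = effort_cost(10.0, 0.1, 1)  # linear cost of 10 N for 0.1 s
linear_soft = effort_cost(1.0, 1.0, 1)   # linear cost of 1 N for 1.0 s (same!)
quad_hard = effort_cost(10.0, 0.1, 2)    # quadratic cost of the hard push
quad_soft = effort_cost(1.0, 1.0, 2)     # quadratic cost prefers the gentle push
```

A linear cost cannot distinguish the two profiles (both equal the impulse itself), while the quadratic cost makes the short, hard push ten times more expensive.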

    • @yuelinzhurobotics · 2 years ago

      @@MatthewKelly2 Hi, Kelly, Thank you for your clear explanation, it's very helpful for me.

    • @yuelinzhurobotics · 2 years ago

      @@MatthewKelly2 Hi, Kelly, I also have another question, can we just understand the closed-loop solution is a global solution? If yes/no, can you give me simple reference material? Thank you very much.

    • @MatthewKelly2 · 2 years ago

@@yuelinzhurobotics - The type of trajectory optimization explained in this video can be thought of as a local solution, which could be evaluated to produce an open-loop controller. If you want the full global solution then you want a different technique: look up "solving the Hamilton-Jacobi-Bellman (HJB) equations". You'll find that they become intractable for most non-trivial problems. The general approach is to use reinforcement learning to approximate the global solution. This is an active area of research, and I'm not an expert -- lots of neat stuff to learn there. Another related thing is model-predictive control. That produces a local closed-loop controller (rather than open-loop). The idea is that you continually solve a trajectory optimization (such as the cart-pole swing-up) from the measured state, which then allows you to feed back and stabilize errors. This is difficult to implement in practice because you need to solve the optimization very quickly (in real time, in your control loop). That being said, it can be done, once you get all of the details right.

  • @shenge5347 · 2 years ago

    This is one of the clearest videos I have found on trajectory optimization. Thank you!

  • @awais_arshad · 2 years ago

    Thanks a lot Mathew. This tutorial is very helpful.

  • @herrefaber6600 · 2 years ago

    It is very rare that someone finds the right level and tone in a video like this. Great job!

  • @claremacrae · 2 years ago

    Fantastic clear explanations - very useful indeed. Thank you.

  • @maratkopytjuk3490 · 2 years ago

    Amazing video, enjoyed watching it! I like the clean formulation and the high level view.

  • @rakmo97 · 2 years ago

Can you leave tf unconstrained rather than fixing it to T? Is it possible that a more optimal solution takes a different amount of time than 2 seconds?

    • @MatthewKelly2 · 2 years ago

Excellent question. Yes, you can make the duration of the trajectory a decision variable, and in many cases that allows the solver to find a "more optimal" solution. One detail is that the total duration then couples nearly every constraint in the optimization problem, making the overall problem more challenging to solve. You can also end up with some counter-intuitive behaviors if you don't set up the objective function and constraints carefully. One related topic: it is generally a bad idea to make the duration of every segment a decision variable, as the solver will often get stuck (Betts discusses this in his book in some detail). Generally you fix the "mesh fraction" allocated to each segment, and then allow a small number of "phase durations" to be decision variables (see the GPOPS-II documentation for more on this).
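One standard way to make the duration a decision variable (a sketch of the general idea, not necessarily what OptimTraj or GPOPS-II do internally) is to rescale time onto a fixed interval, which is also why the duration T couples every defect constraint:

```python
# Free final time via time scaling: let tau in [0, 1] and t = T * tau, where T
# (the total duration) becomes one extra decision variable in the NLP.
def scaled_dynamics(tau, x, u, T, dynamics):
    # Chain rule for t = T * tau: dx/dtau = T * f(x, u).
    # Every defect constraint now depends on T, hence the coupling.
    return T * dynamics(x, u)

# Made-up example dynamics, f(x, u) = -x + u, just to exercise the wrapper:
f = lambda x, u: -x + u
rate = scaled_dynamics(0.5, 2.0, 0.0, 3.0, f)  # = 3 * (-2 + 0) = -6
```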

  • @dongdongzheng5990 · 2 years ago

    Thank you for the presentation👍👍👍👍

  • @emanuel4516 · 2 years ago

Hi Matthew, how would you implement a constraint on an intermediate point of the trajectory? Let's say you want to avoid the tree in the Cannon Shooting example on your website. Is it possible to do this with a single shooting method, or do you have to rely on multiple shooting or a direct collocation method?

    • @MatthewKelly2 · 2 years ago

      You can add a constraint on an intermediate point in either method, although the gradients are much nicer in the direct collocation formulation. The approach is to select one or more knot points at which to apply the constraint, and then append them to the optimization problem. You always need to watch out for "tunneling". For example, if you have widely spaced collocation points, you can compute a "feasible" trajectory that passes directly through a thin wall if each of the collocation points is "not in collision" with the wall. There are various heuristics for avoiding this, most either based on "padding" the constraint (making the wall wider than it really is) or adding extra collocation points near obstacles.

    • @emanuel4516 · 2 years ago

@@MatthewKelly2 thanks for the answer! I am thinking about how to implement this intermediate constraint though. Maybe using a conditional statement (in the tree example one could do something like: IF x = obstacle_coordinate AND y <= tree_height THEN Ceq = 1)? It doesn't seem very efficient to me though...

    • @MatthewKelly2 · 2 years ago

@@emanuel4516 - Correct. The 'if' statement would be no good, as it causes a discontinuity in the sparsity pattern of the constraint Jacobian. You'll want to add a constraint something like 'distance_to_obstacle(state(i)) > 0', and then apply that constraint at every single point on the trajectory that could come in contact with the tree. This allows the NLP solver itself to figure out whether the constraint is active or not. The 'distance_to_obstacle' function should be continuous (no if statements).
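A sketch of what such a smooth clearance function might look like in Python (the circular obstacle, its location, and the sample points are all made up for illustration):

```python
import math

def distance_to_obstacle(x, y, cx=5.0, cy=2.0, radius=1.0):
    # Smooth signed clearance to a circular obstacle: positive outside,
    # negative inside. No 'if' statements, so the constraint Jacobian seen
    # by the NLP solver stays continuous everywhere (except the center).
    return math.hypot(x - cx, y - cy) - radius

# In the NLP you would impose distance_to_obstacle(x_i, y_i) > 0 at every
# trajectory point that could hit the obstacle, padding the radius slightly
# to guard against "tunneling" between widely spaced collocation points.
clearances = [distance_to_obstacle(x, y) for x, y in [(0, 0), (5, 4), (10, 0)]]
```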

  • @mingshey · 3 years ago

So beautiful! Well done. I love that you included initial states spanning a broad range of momenta, evolving into a narrow final range of momenta.

  • @sonyerric2 · 3 years ago

Hi, is it possible to incorporate obstacle avoidance into the direct collocation methods presented here? Are there any resources that talk about this, or certain keywords I could search for?

    • @MatthewKelly2 · 3 years ago

      Yes, although the details start to get complicated. The general idea is to add a distance-squared constraint at each collocation point. You need to be somewhat careful about avoiding "tunneling". Depending on the complexity of your kinematics, these additional constraints can be expensive to evaluate. I can't think of any good references off the top of my head, but I'm sure that they're out there.

  • @yalunwen486 · 3 years ago

Thank you Kelly for this great presentation. Just wondering, what did you use to insert equations into the presentation? Thanks again.

    • @MatthewKelly2 · 3 years ago

If you follow the first link (www.matthewpeterkelly.com/tutorials/trajectoryOptimization/cartPoleCollocation.svg), you'll discover a big SVG file that has all of the slides in it. I created this file using Inkscape. There is a LaTeX plugin for Inkscape, although it was pretty unreliable at the time I wrote this presentation. The animations (really just links to different views onto each slide) were auto-generated using Sozi, an animation front-end for Inkscape.

  • @FreeFallin20383 · 3 years ago

    Hey Matt, do you happen to know what the results were for minimum force? I solved this using Python and wanted to compare. Thank you.

  • @user-nv2yo3fn5i · 3 years ago

Thank you so much! It really helps me a lot. Recently I have been trying to learn the pseudospectral method for optimal control problems. Can you recommend some good materials? Many thanks!

  • @sambrothie · 3 years ago

    Thank you so much for all of the incredible resources you have created, Matthew!

  • @FreeFallin20383 · 3 years ago

Awesome video. I am trying to do some trajectory optimization in Python. Do you know if SciPy has trapezoidal or Hermite-Simpson collocation? Thank you.

    • @MatthewKelly2 · 3 years ago

      Sorry - I don't have much experience using Scipy for trajectory optimization! Let me know if you find a good library.

    • @sambrothie · 3 years ago

      You're welcome to check out the Python package I've been working on, Pycollo. The GitHub is here: github.com/brocksam/pycollo. It's a work in progress but it can quickly and efficiently solve this problem among many others (in fact, this problem is implemented as an example)! I'm working on making it conda-installable right now.

    • @FreeFallin20383 · 3 years ago

      @@MatthewKelly2 No problem!

    • @FreeFallin20383 · 3 years ago

@@sambrothie Wow, thank you, this is very neat as well. Currently I am also trying to solve it the long way so I can better learn how this all works. Your package is a great tool for me to check against. Do you happen to know what the equality constraints (final/initial state) would look like if I were to use scipy minimize?

  • @jonathancangelosi2439 · 3 years ago

    I'm currently trying to learn about pseudospectral methods for trajectory optimization and this is a fantastic intro to the topic. Thank you!

  • @pedrocalorio1655 · 3 years ago

In the trapezoid method, where you assume there is a line connecting each pair of data points, isn't this a problem in optimization, since the first and second derivatives of this function are NOT continuous?

    • @MatthewKelly2 · 3 years ago

Good question, if I'm understanding it correctly. Let me try restating it: how can we use a non-smooth function to represent the trajectory in a nonlinear program that requires everything to be smooth? The key here is that the trajectory (piecewise linear in time) is non-smooth in time, but the NLP doesn't care at all about time: it only cares about decision variables. If you carefully look at the gradients, you'll find that the gradient of the trapezoidal discretization with respect to the decision variables is continuous. Another possible interpretation: what if I need the solution trajectory to be smooth with respect to time? Then you can use a higher-order method whose solution trajectory has continuous derivatives with respect to time.
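The trapezoidal defect constraints being discussed can be written out in a few lines; here is a hedged Python sketch (the double-integrator check at the end is my own illustration, not from the talk):

```python
import numpy as np

def trapezoid_defects(t, x, u, f):
    # Trapezoidal collocation defects; all zero for a valid discrete trajectory:
    #   defect[k] = x[k+1] - x[k] - 0.5*h[k]*(f(x[k],u[k]) + f(x[k+1],u[k+1]))
    # Each defect is a smooth function of the decision variables (x, u), even
    # though the interpolated trajectory is only piecewise linear in time.
    fk = np.array([f(xk, uk) for xk, uk in zip(x, u)])
    h = np.diff(t)[:, None]
    return x[1:] - x[:-1] - 0.5 * h * (fk[:-1] + fk[1:])

# Made-up check: a double integrator (state = [pos, vel], control = accel).
f = lambda xk, uk: np.array([xk[1], uk])
t = np.linspace(0.0, 1.0, 5)
u = np.ones(5)                        # constant unit acceleration
x = np.column_stack([0.5 * t**2, t])  # exact solution, so defects vanish
defects = trapezoid_defects(t, x, u, f)
```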

    • @pedrocalorio1655 · 3 years ago

@@MatthewKelly2 this actually makes a lot of sense, the NLP solver being independent of time. Thanks once again for the attention! Learning a lot from you!!

  • @pedrocalorio1655 · 3 years ago

How do you handle trajectory optimization, or any gradient-based optimization, when the input law is discontinuous via an if statement?

    • @MatthewKelly2 · 3 years ago

It depends what you mean by "input law is discontinuous", so let's be more specific. If the objective or constraint function has a discontinuity, then you've got trouble. This will cause a jump in the gradients that will break most continuous optimization solvers. The general "trick" is to reformulate your problem in such a way as to inform the solver about the discontinuity and let it manage it via switching the set of active constraints. This is often done using slack variables. A common example is to replace an absolute value function with two positive slack variables and an equality constraint, as is described in Betts' book (see the references at the end of the presentation or in my journal paper linked in the description). If the discontinuity occurs in the solution of the trajectory optimization, which can happen for some smooth optimization problems, then you can also have problems. The NLP will often converge, but your discretization will often be bad. The solution here is to iteratively adjust your mesh; this can be done manually, or through adaptive meshing. See the research by Anil Rao et al. on hp-adaptive meshing that is used in GPOPS for a concrete example and thorough analysis.

    • @pedrocalorio1655 · 3 years ago

@@MatthewKelly2 thank you so much for your complete answer!! However, I don't know if I'm mixing things up here, but aren't slack variables part of the KKT conditions for optimality? And what if I cannot reformulate my NLP to make it continuous over my feasible set; does this mean that I'll have to go for stochastic methods of optimization? Sorry for coming back with more questions...

    • @MatthewKelly2 · 3 years ago

@@pedrocalorio1655 Some NLP solvers use slack variables internally, but you don't need to worry about that. The slack variables that I'm talking about would be added directly as decision variables while formulating the NLP, typically as extra state or control variables. The `minimumWork/MAIN_cstWork.m` example in my OptimTraj software shows a simple example of using slack variables to represent an absolute value function. Similar tricks can be used for min() and max(). There are also various ways to "smooth out" functions and interpolation from tables. Iterative methods can be made "smooth" by using a fixed number of iterations. You can use stochastic optimization, but I've never gotten that to work reliably. The problem is that trajectory optimization requires a large number of equality constraints for the defects, and stochastic optimizers have a really hard time with equality constraints. Put differently, the whole idea behind collocation methods (like I describe here) is to put the optimization into a format that makes it easy for sparse gradient-based solvers to optimize. If you switch to a stochastic solver, then you would probably want a different formulation entirely, something closer to single shooting (which has its own set of challenges).
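A minimal Python illustration of the slack-variable trick for the absolute value (this only demonstrates the identity; in a real NLP, `s_plus` and `s_minus` would be decision variables with `s >= 0` bounds and the equality constraint below, and the solver would drive one of them to zero at the optimum):

```python
import numpy as np

def abs_via_slacks(u):
    # Reformulation: add slacks s_plus, s_minus >= 0 with the equality
    # constraint u = s_plus - s_minus, and replace |u| in the objective by
    # s_plus + s_minus. Here we compute the optimal split directly, just to
    # show that the reformulation reproduces |u| exactly while staying smooth.
    s_plus = np.maximum(u, 0.0)
    s_minus = np.maximum(-u, 0.0)
    return s_plus, s_minus

u = np.array([-2.0, 0.0, 3.5])
s_plus, s_minus = abs_via_slacks(u)
```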

  • @mortezakhosrotabar · 3 years ago

    Thank you so much Matthew for your presentation. It was very useful.

  • @trunc8 · 3 years ago

I watched this twice to soak up all the information. Thank you so much for the amazing lecture! What's the difference between knot points and collocation points?

    • @MatthewKelly2 · 3 years ago

Good question! Knot points are the points along the trajectory that separate continuous polynomial sections. Put differently, they are the places where there is a discontinuity in some derivative. This terminology is broader than trajectory optimization: it comes from polynomial splines. If the spline has N segments, then it will have N+1 knot points, including the boundaries of the spline. Collocation points are the points where the system dynamics are evaluated, and represent the points that are used to discretize the system dynamics. The "collocation error" will be zero at these points for a valid solution to the NLP. The collocation and knot points often overlap, especially for low-order methods. For example, in trapezoidal collocation, the knot and collocation points are identical. In backward Euler discretization, all collocation points overlap with knot points, but there is a single knot point (the first point on the trajectory) that is not a collocation point. For the Hermite-Simpson method, all knot points overlap with collocation points, but there is an extra collocation point in the middle of each segment that does not overlap with a knot point.
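The point counts described here can be sketched in a few lines of Python (a uniform mesh is assumed; the function name is mine):

```python
import numpy as np

def knot_and_collocation_points(t0, tf, n_segments, method):
    # Enumerate the two point sets for two common transcriptions.
    knots = np.linspace(t0, tf, n_segments + 1)  # N segments -> N+1 knots
    if method == "trapezoid":
        colloc = knots.copy()  # knot and collocation points coincide
    elif method == "hermite-simpson":
        mids = 0.5 * (knots[:-1] + knots[1:])
        colloc = np.sort(np.concatenate([knots, mids]))  # extra midpoint per segment
    else:
        raise ValueError(f"unknown method: {method}")
    return knots, colloc

knots, colloc = knot_and_collocation_points(0.0, 1.0, 4, "hermite-simpson")
```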

    • @trunc8
      @trunc8 3 years ago

      @@MatthewKelly2 That clears my doubt. Thank you!

  • @abcddd580
    @abcddd580 3 years ago

    A very lucid, concise, yet sufficiently detailed explanation. Great!

  • @binxuwang4960
    @binxuwang4960 3 years ago

    This presentation is so well made, thanks a lot!

  • @leejonglek
    @leejonglek 3 years ago

    This is fantastic. How can I do this? I tried and failed with a linearized LQR controller. What do I have to study to do this? I hope this reaches you.

    • @MatthewKelly2
      @MatthewKelly2 3 years ago

      This animation is playing back the solution to a trajectory optimization problem. If you follow the "source code" link in the description it will bring you to OptimTraj, a trajectory optimization library that I wrote which includes this problem as an example. There is also a link to a tutorial that explains some of the concepts behind trajectory optimization. You can definitely make a LQR controller stabilize this sort of trajectory. The trick is to linearize around the trajectory, not the final solution. Check out Russ Tedrake's Underactuated Robotics class at MIT, where he discusses this topic in depth: underactuated.mit.edu/lqr.html#example2
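
      A minimal sketch of the backward Riccati recursion behind such a finite-horizon (time-varying) LQR, in Python. For brevity, every linearization below is the same double-integrator model; along a real swing-up trajectory the A[k], B[k] matrices would differ at each knot point:

```python
# Finite-horizon LQR via a backward Riccati recursion over a sequence of
# linearizations A[k], B[k] (here all identical: a discrete double integrator).
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)            # state deviation cost
R = np.array([[0.1]])    # control effort cost
N = 50                   # horizon length (steps)

P = Q.copy()             # terminal cost-to-go
K = [None] * N
for k in reversed(range(N)):
    S = R + B.T @ P @ B
    K[k] = np.linalg.solve(S, B.T @ P @ A)   # feedback gain at step k
    P = Q + A.T @ P @ (A - B @ K[k])         # cost-to-go at step k

# Closed-loop rollout from a perturbed initial state:
x = np.array([1.0, 0.0])
for k in range(N):
    u = -K[k] @ x
    x = A @ x + B @ u
print(np.linalg.norm(x))   # deviation is driven toward zero
```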

    • @leejonglek
      @leejonglek 3 years ago

      ​@@MatthewKelly2 Thank you so much for your information!

    • @leejonglek
      @leejonglek 3 years ago

      @@MatthewKelly2 Thank you so much again. Until now I didn't know what I had to study. Thank you very much for introducing me to this field.

  • @sonyerric2
    @sonyerric2 3 years ago

    Thank you for the great video. I was playing around with the MATLAB demos and was wondering (regarding a simple pendulum attached to a cart) whether the length of the pole can be varied with respect to time. If so, how can I make the necessary modifications to allow this changing length?

    • @MatthewKelly2
      @MatthewKelly2 3 years ago

      Good question! In the demo that I wrote, it is a hard-coded assumption that the pole length is fixed. That being said, you can definitely make a version of the dynamics function that supports a time-varying pole length. There are three ways that I could imagine doing it. (1) Add a passive element, such as a spring, that acts along the pole and allows the mass to move along its length. This would require writing out a new dynamics equation, but would not require adding new inputs to the function. (2) Explicitly make the length of the pole a function of time and then pass that function in as a parameter. Then take the time that is passed into the dynamics function and use it to evaluate the pole length (and its derivatives). Then you need to update the dynamics to properly account for accelerations related to changes in pole length. (3) Add an actuator that can control the length of the pole (easiest to use a force actuator here), and then update the dynamics accordingly. This would allow the optimization to "select" the correct length for the pole. In all cases you would need to update the dynamics of the system, but that shouldn't be too tricky as it is a relatively simple system. Good luck!
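
      As a hedged sketch of option (2), here is a simple point-mass pendulum (no cart) with a prescribed time-varying length l(t); the length schedule and function names are illustrative, and the full cart-pole version needs the same extra rate-of-change terms:

```python
# Dynamics for a hanging point-mass pendulum with prescribed length l(t).
# Variable-length pendulum equation (no cart, massless rod):
#   theta_ddot = -(g/l)*sin(theta) - 2*(ldot/l)*theta_dot
import numpy as np

g = 9.81

def pole_length(t):
    """Illustrative length schedule: a slowly growing pole."""
    l = 1.0 + 0.1 * t
    ldot = 0.1
    return l, ldot

def dynamics(t, z):
    """State z = [theta, theta_dot]; returns its time derivative."""
    theta, theta_dot = z
    l, ldot = pole_length(t)
    theta_ddot = -(g / l) * np.sin(theta) - 2.0 * (ldot / l) * theta_dot
    return np.array([theta_dot, theta_ddot])

# With zero angular rate the length-rate term vanishes, so at t = 0 this
# reduces to the standard pendulum acceleration -(g/l)*sin(theta):
z = np.array([0.3, 0.0])
dz = dynamics(0.0, z)
```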

    • @sonyerric2
      @sonyerric2 3 years ago

      @@MatthewKelly2 I will try to tinker with these ideas. Thank you so much for your suggestions!

  • @jinhocho94
    @jinhocho94 3 years ago

    Great lecture and kindness. Thanks Matthew!! You helped me a lot.

  • @chiamatthew6829
    @chiamatthew6829 3 years ago

    this is a life saver

  • @kvasios
    @kvasios 3 years ago

    One question: could someone use Pontryagin's maximum principle or the Hamilton-Jacobi-Bellman equation as a starting point for the problem formulation and take it from there, or am I missing something? Thanks!

    • @MatthewKelly2
      @MatthewKelly2 3 years ago

      I believe so. A "trajectory optimization" problem is just solving a special case of the Hamilton-Jacobi-Bellman (HJB) equations for a single point. In fact, you can use the full solution (optimal policy) from the HJB equations to reconstruct an optimal trajectory. The problem is the "curse of dimensionality": usually it is impractical to solve the HJB directly, but it is feasible to compute a locally optimal solution at a single point (trajectory optimization). I believe you could derive any trajectory optimization framework from the HJB equations, but I'm less familiar with the mathematics involved, and find the derivations in this video easier to understand. You can definitely set up and solve trajectory optimization starting from Pontryagin's maximum principle, and that is closely related to how "indirect" methods for trajectory optimization work. I know less about those techniques, but if you follow the references at the end of the presentation you'll be able to learn more.

    • @kvasios
      @kvasios 3 years ago

      @@MatthewKelly2 Thank you so much for taking the time to answer! I really have a better understanding now... good point about Pontryagin's maximum principle and the "indirect" methods indeed.

  • @kvasios
    @kvasios 3 years ago

    An exceptional amount of value. Thanks!

  • @jomurciap
    @jomurciap 3 years ago

    Thank you Matthew, excellent video.

  • @sonyerric2
    @sonyerric2 3 years ago

    Can the F be further broken down to become ma_1?