
Thread: A new free vehicle dynamics resource - Dan's Vehicle Dynamics Corner

  1. #261
    Senior Member
    Join Date
    Mar 2005
    Location
    Australia
    Posts
    1,690
    Rory,

    Thanks for the link. It explains much.

    It seems that "Verification & Validation" are two relatively new buzzwords that entered the Engineering arena in the last generation or so (~20+ years) with the increased use of Computer Aided Engineering. Given that much of the use of CAE these days, such as, say, the space-frame designs in FS/FSAE, is such a debacle, it is no surprise to see these buzzwords being so abused.

    The linked document, the work of ~30 experts for over five years (!), shows just how far standards have fallen. Some brief examples.

    1. Title has "Guide for [V&V]..." in bold large font, with "Overview" in a lighter font, making it less obvious that the document is only a summary.

    2. My quick scan of the Overview found quite a few obvious typos, which is hardly encouraging for a document purporting to "... help establish confidence in the results of complex numerical simulations." and to "... develop standards for assessing the correctness and credibility of ...".

    3. Even though this "Overview" is apparently fully half the length of the actual "Guide" itself, and that it says
    "The [Guide...] ends with a Glossary, which perhaps should be reviewed before venturing into the main body of the text. The Glossary section is viewed as a significant contribution to the effort to standardize the V&V language so all interested participants are conversing in a meaningful manner.",
    the Overview itself contains NO Glossary!

    What has modern society got against CLEAR DEFINITIONS!!!

    Well, here is a clue. A bit further into the Overview we find this pearl.
    "Much of V&V is not a ‘hard’ science, which is the bread-and-butter of most of computational mechanics, but more a ‘soft’ science like the philosophy of science, where differing points of view have merit, and need not be evaluated as either right or wrong."

    Aaaarggghhh....!!!!!

    More of the "There are no right or wrongs, everyone's opinion is equally valid, so all the kiddies get a gold-star..." CODSWALLOP that the education system has been peddling for at least two generations now.

    No wonder so few FS teams can build "...a small car that carries one person 30 kilometres...".
    ~~~o0o~~~

    Which gets us back to Danny's "hand calc", which is in no way a "validation" of anything, even by the sloppy definition given by the ASME (p7 of above Overview). It is more like the spiel given by the soap salesman who tells us "...it is guaranteed to wash up to 95% brighter...".

    And back to Danny's much repeated claim that the Chassis-Sim computer simulation was "..crucial in making the decision...", when the same result came out of a "hand calc".
    ~~~o0o~~~

    Ahh, standards! Spiralling down the S-bend at an ever faster rate...

    Z
    Last edited by Z; 01-18-2017 at 12:28 AM. Reason: Oops, some typos.

  2. #262
    Senior Member
    Join Date
    Mar 2008
    Location
    Brighton, MI
    Posts
    686

    Validation, Verification, Vindication

    I don't want to hijack Danny's thread, but a professional discussion on validation of FSAE car subsystem designs is long overdue. I'd suggest a new thread to handle the traffic on this subject, but it should be a student or judge submittal. A list of several starter possibilities for chassis-related topics would include Ride/Roll Steer design, TLLTD specification and achievement, "toe steer" (aligning moment compliance steer), wheel camber stiffness, understeer/oversteer goals, steering effort, max lateral acceleration, and roll gradient, just to name a few of my favorites.

    In the industry, these factors (Subsystem Design Goals) are set in place before hardware is designed or selected to meet them. They are not 'targets' which must be hit exactly, but a range of measurable parameters that produce a vehicle response on the next vehicle 'up'. Sometimes they are scalars, sometimes they are a multidimensional box (spider chart anyone?) that represents the best you can achieve depending on time, material properties, cost, packaging, availability, and serviceability.

    For example, your Impact attenuator has some specs which can only be met after some form of testing. Not necessarily based on crashing a car into a barrier at a specified speed, or a car-to-car head-on, or a car-to-car T-bone. But some form of proof is needed to assure judges (and juries) that the final product has sufficient merit to be installed in a car as accepted by the rules.

    If you want verification, specify that each and every car in the top 10 must be crashed, and must show deceleration levels for a head-injury index below so many g's for so many milliseconds, before any awards are handed out. The other 'goals' I mentioned could be scored and integrated into a final weighted average.

    Too many posts here reveal the notion that there is far too much idealized estimation of performance compared to what would actually show up in measurement. Sure, you can set goals (and their ranges) that are easy to achieve, but the chain of specification matching follows the entropy from the ground up to the steering wheel: shit flows downhill.

    The quality of computer modeling and simulation today is at a sufficiently high level that it can take you places not achievable out on the road. But sooner rather than later, a real physical test of your theories and practice must be done to satisfy the public that your work is more than just good luck.

  3. #263
    Z,

    I'd love to see what you just said about hand calculation of aero to an F1 Technical Director and if necessary I can make the appropriate introductions.

    Verifying what the car is doing via hand calculations from race data is a critical engineering skill. It is a skill that is dangerously atrophying. If you have an issue with this, Z, then I suggest you take it up with the likes of Clarence "Kelly" Johnson, Patrick Head, Sam Michael and Ross Brawn, to name a few. What these gentlemen all have in common is that they have all achieved results on the big stage.

    In particular, one of the things that made Kelly Johnson unique was his ability to look at an aerospace structure and tell you the pressure to within +/- 0.3 psi. That came from hours of knowing his basics and confirming them with data.

    If you have any doubts the following speaks for itself,

    http://www.chassissim.com/blog/chass...it-lemans-2016

    and

    http://www.chassissim.com/blog/chass...15-lemans-24hr

    and

    http://www.chassissim.com/blog/chass...thurst-12-hour

    ChassisSim's track record does speak for itself.

    Danny Nowlan
    Director
    ChassisSim Technologies

  4. #264
    It's a very old term that has been used since the time of the Caesars. Its original meaning is a combination of strength, health and worthiness. So to validate an observation or result would imply testing its strength, health and worthiness.
    Competition Systems
    Melbourne, Australia
    www.compsystems.com.au

  5. #265
    Senior Member
    Join Date
    Mar 2005
    Location
    Australia
    Posts
    1,690
    Quote Originally Posted by ChassisSim View Post
    ChassisSim's track record does speak for itself.
    Danny,

    If you want the above quote to sound like well-reasoned engineering analysis, and not just marketing BS, then please tell us about all the ChassisSim users who lose races. No need to give names of all the Teams, just a statistical breakdown of all ChassisSim users and their placings in all races they have entered.

    That way your track record will HONESTLY "speak for itself".

    (Questions for students of logical reasoning:
    Q1. If a car is painted red, and if that car wins a race, then is it logically valid to conclude that the car won the race BECAUSE it was painted red?
    Q2. If a race-team uses ChassisSim, and if that team wins a race, then is it logically valid to conclude that the team won the race BECAUSE they used ChassisSim?

    In the deepest depths of the last Dark Ages, a thousand years ago, all small boys who were lucky enough to go to school would instantly know the answers to the above questions, because they studied such stuff as part of the Trivium (= the trivially-easy subjects). Shame it ain't so anymore. )
    ~~~o0o~~~

    EfiOz,

    Yes, indeed. A briefest look in a dictionary shows that "validate" stems from Latin "valere" = "to be strong".

    So it is beyond me how a hand-calc can be claimed to "validate", or "test the strength of", a computer simulation, given that the computer simulation is nothing more than a more detailed version of the hand-calc. At most, the hand-calc provides only a first-step verification that the computer-code is roughly in the ball-park, and does not have huge bugs. (This "code verification" was hinted at by Rory a few posts up, and in the linked V&V document.)
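As a minimal sketch of what such a "first-step verification" might look like in code (all numbers are hypothetical, and the 15% tolerance is an arbitrary choice for illustration):

```python
# Minimal "first-step verification" sketch: check a simulator's
# steady-state lateral load transfer against the classic hand-calc
# W * a_y * h / t.  All numbers are hypothetical.

def hand_calc_load_transfer(weight_n, lat_accel_g, cg_height_m, track_m):
    """Total lateral load transfer (N) in steady-state cornering."""
    return weight_n * lat_accel_g * cg_height_m / track_m

def roughly_agrees(hand, sim, rel_tol=0.15):
    """A coarse 15% band: enough to catch gross bugs in the code,
    nowhere near enough to 'validate' anything."""
    return abs(hand - sim) <= rel_tol * abs(hand)

hand = hand_calc_load_transfer(3000.0, 1.5, 0.30, 1.20)   # ~1125 N
print(hand, roughly_agrees(hand, 1190.0))   # suppose the sim reports 1190 N
```

A check like this can only say the code is "roughly in the ball-park"; it says nothing about whether the model represents the real car.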
    ~~~o0o~~~

    I also fully agree with Bill that a dedicated thread should be started to deal with this whole business of "validation" of student's theoretical designs. I won't start such a thread, because I don't need it.

    But it is quite clear that very few students in FS/FSAE have any understanding of what reasonable validation is. A good example is the claimed validation of FEA-simulated torsional-stiffness numbers for frames, perhaps the easiest such simulation->physical-testing to get right. My quick skim through some Design Reports I have come across suggests that if the students wore blindfolds and threw darts at a dartboard (ie. did some simple "hand-calcs"!), then they would do no worse than their claimed torsional-stiffness "sim/test" numbers.
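For reference, the physical half of such a frame-stiffness check reduces to a few lines of arithmetic. A minimal sketch, with made-up test numbers:

```python
import math

def twist_deg(dz_left_m, dz_right_m, gauge_span_m):
    """Frame twist angle (deg) from two dial-gauge deflections
    measured across a known span."""
    return math.degrees(math.atan((dz_left_m - dz_right_m) / gauge_span_m))

def torsional_stiffness(torque_nm, twist_degrees):
    """Torsional stiffness in N.m/deg from a simple twist test."""
    return torque_nm / twist_degrees

# Hypothetical test: 500 N on a 0.6 m lever arm, dial gauges 1.0 m
# apart reading +3.5 mm and -3.5 mm.
torque = 500.0 * 0.6                       # 300 N.m applied
theta = twist_deg(0.0035, -0.0035, 1.0)    # ~0.4 deg of twist
print(round(torsional_stiffness(torque, theta)))   # N.m/deg
```

The hard part, of course, is not the arithmetic but the honest estimate of how accurate each measured input is.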

    In fact, I am now wondering if such "dartboard" hand-calcs were used for most of the numbers in the DRs?

    Z
    Last edited by Z; 01-26-2017 at 07:19 PM.

  6. #266
    Z,

    There is a very simple question to be asked. ChassisSim's track record is self-evident. But the critical question is: what's yours? What motor racing categories have you worked in that allow you to stand in judgement over everyone else? I'm not having a go; it's a genuine question.

    However, what is truly mind-boggling here is that you can't seem to comprehend that one of the ultimate tests, and indeed validations, of whether things add up is comparison to logged data. In particular, what don't you understand about the fact that every damper pot, married up to an appropriately measured setup, is actually a load cell?
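A minimal sketch of the damper-pot-as-load-cell idea (the spring rate and motion ratio below are hypothetical; a real conversion must also account for damper force, friction and installation stiffness):

```python
def wheel_load_from_pot(pot_travel_m, spring_rate_n_m, motion_ratio):
    """Change in wheel load (N) implied by a change in damper-pot travel.

    motion_ratio = damper travel / wheel travel, so by virtual work the
    spring force seen at the damper maps to the wheel scaled by the
    motion ratio.  Damper force, friction and installation stiffness
    are ignored in this sketch.
    """
    return pot_travel_m * spring_rate_n_m * motion_ratio

# Hypothetical: 5 mm of extra pot travel, 35 kN/m spring, MR = 0.9
print(wheel_load_from_pot(0.005, 35000.0, 0.9))   # extra wheel load, N
```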

    It would be wise for you to reflect on this before responding.

    Best Regards

    Danny Nowlan
    Director
    ChassisSim Technologies

  7. #267
    Hi Z,

    Thank you for your questions.

    I don't normally do this, but reading what you have said made me want to reply.

    As a final year engineering student, and having competed in FSAE for three years straight, a "hand calculation" as a "validation" method is a necessary requirement (this is supported by Professor Reza Nakhaie Jazar in his book 'Vehicle Dynamics: Theory and Application'), or else it would just be "he said, she said".
    As you may be aware, there is a limitation on the time available to "test", as well as the funds available to do extensive "testing". In the first two years of working with my fellow FSAE engineering students, I experienced firsthand how costly and ineffective it was to skip "hand calculations" and instead just rely on estimates and "testing".
    Upon becoming the System Head Leader, I was motivated to speak to PhDs, Professors and Engineers in the field as much as possible in order to gain their knowledge of Vehicle Dynamics and "hand calculations".
    From this, it was evident that the use of "hand calculations" was a necessary tool to "validate" the engineer's predictions and outcomes.
    Why? This is because it provides the basis for an engineer to understand the following:
    1. How CoG position combined with cornering stiffness causes under-/over-/neutral steer;
    2. The stresses through suspension members;
    3. RC height;
    4. Roll-centre migration;
    the list goes on.
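The first item on that list, for instance, is the familiar linear understeer-gradient hand calc. A minimal sketch with illustrative numbers (not from any particular car):

```python
def understeer_gradient(w_front_n, w_rear_n, c_front_n_rad, c_rear_n_rad):
    """Classic linear understeer gradient K = Wf/Cf - Wr/Cr (rad/g).

    w_*: static axle loads (N); c_*: axle cornering stiffnesses (N/rad).
    K > 0 -> understeer, K < 0 -> oversteer, K = 0 -> neutral steer.
    """
    return w_front_n / c_front_n_rad - w_rear_n / c_rear_n_rad

# Hypothetical 3000 N car, 45/55 front/rear split, equal axle
# cornering stiffnesses of 60 kN/rad:
k = understeer_gradient(1350.0, 1650.0, 60000.0, 60000.0)
print(k)   # negative: rear-heavy with equal stiffnesses trends to oversteer
```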

    Danny has provided a lot of support and guidance to myself and my team. He was kind enough to provide us with full access to ChassisSim. I used ChassisSim to run simulations to confirm whether the "hand calculation" would accurately predict the car's movement.
    For example, if the car had a higher RC, or a different camber, etc., what would happen to the car as it moves around the track? In my experience, I have always done a "hand calculation", followed by a simulation in ChassisSim, then finished with real-life "testing".
    I can confidently say that, in many instances, all 3 results have consistently supported each other with minimal differences (often only by 0.5 - 1 second). In my opinion, ChassisSim is quite accurate, and in using it, I was able to cut down time and cost by reducing testing and calculation time.
    To me, our team has "validated" that ChassisSim does in fact "VALIDATE" our predictions, and we will continue to use it as it does cut down a lot of time and cost.

    Going back to what Danny said earlier, which I think is quite crucial here... where are all of these questions coming from? Have you used ChassisSim? Have you done a "hand calculation" before and not had the same results as ChassisSim?
    Are all these questions/accusations coming from firsthand experience/failed validations?
    If so, perhaps you can provide everyone with a detailed account of your calculations, testing and results to prove what you have said about hand calculations and ChassisSim.

    Best Regards,
    Mike

  8. #268
    Senior Member
    Join Date
    Mar 2005
    Location
    Australia
    Posts
    1,690
    This is becoming tedious, but one more time...
    ~~~o0o~~~

    Danny,

    1. For the record, the first subject of the Trivium (ie. the really easy stuff) was Grammar. Your grasp of such is poor. At the very least you should end your questions with one of these "?". Communication works much better with this attention to detail.
    ~o0o~

    2. I am not currently selling any motorsports products, so have no need to spruik a "track record". It is irrelevant, but if you want to find it you can do your homework. And, BTW, I am still waiting for that list of all the losing Teams with which you are associated.
    ~o0o~

    3. Regardless of my "track record", I have been around long enough to know how many beans makes five, and I can smell bulldust from the proverbial country mile away. Your posts and blogs are swamped with it. Your credibility would improve if you toned the malarkey down a bit. But, then again, you are working in motorsport, which is fuelled by the stuff, so maybe keep shovelling it (although preferably not in this notionally "engineering" forum).
    ~o0o~

    4. Getting back to V&V, it seems you are still making no attempt to distinguish between "verification" and "validation". The above-linked document (the work of 30 "experts" for 5 years!) makes some attempt to define these two terms. But, I guess when you are swimming in a sea of malarkey of your own making, clear DEFINITIONS are a bit redundant.

    Anyway, your "logged data" is a weak validation of both the computer-sim and the hand-calcs. On the other hand, the hand-calc is only a weak verification of the computer code. Both these Vs are weak because of the great many sources of error that can enter the processes. Perhaps you can try listing those many sources of potential error? If you do, then I will happily add the many other sources of error that you miss.

    Remember, ANY MEASUREMENT IS MEANINGLESS, WITHOUT KNOWLEDGE OF ITS ACCURACY!!!
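As a minimal sketch of what qualifying a number with its accuracy might look like, here is first-order uncertainty propagation for an aero load inferred from damper-pot travel (all figures hypothetical):

```python
import math

def aero_load(pot_travel_m, spring_rate_n_m, motion_ratio):
    """Aero load inferred from extra damper-pot travel (pure product)."""
    return pot_travel_m * spring_rate_n_m * motion_ratio

def aero_load_with_uncertainty(pot, d_pot, rate, d_rate, mr, d_mr):
    """First-order (root-sum-square) propagation of relative errors.
    Valid for a pure product F = pot * rate * mr with small,
    independent input uncertainties."""
    f = aero_load(pot, rate, mr)
    rel = math.sqrt((d_pot / pot) ** 2 + (d_rate / rate) ** 2 + (d_mr / mr) ** 2)
    return f, f * rel

# Hypothetical: 5 +/- 0.5 mm travel, 35 +/- 1 kN/m rate, MR 0.9 +/- 0.05
f, df = aero_load_with_uncertainty(0.005, 0.0005, 35000.0, 1000.0, 0.9, 0.05)
print(round(f, 1), "+/-", round(df, 1), "N")   # the band is ~12% of the value
```

Even with these fairly generous input tolerances, the result carries an uncertainty band of roughly one-eighth of its own value, which is exactly the qualification a bare number lacks.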
    ~o0o~

    5. Finally there is the pure-BS that started all this, namely your claim that ChassisSim was "...crucial in making the decision..." to use the dive-planes. That decision was a no-brainer, and any half-competent race-engineer would have made it as a matter of routine. Your subsequent justification (in response to JT A., near bottom of last page) that "Where ChassisSim came to the party was actually quantifying the effect [the dive planes were] going to have." is also clearly BS, given that you also claimed that "a hand calc ... was spot on"!

    Summing-up your own argument.
    "A race-engineer has experience of dive-planes, and has a sufficiently accurate estimate of their performance (CL.A) that he can do a hand-calc of their potential speed advantage through a corner, and then after some real laps that hand-calc is validated by the car's logged data as being "spot on"."

    Q. Why, then, is a computer-simulation "crucial" to the decision to use said dive-planes???
    A. It is NOT!!!

    Upon reflection, this "cruciality" is pure marketing-malarkey.
    ~~~o0o~~~

    Mike90,

    If you have really been here for 3 years (and are not just Danny-in-disguise?), and if you have been paying attention, then you should know that I am responsible for by far the majority of hand-calcs on these pages. I am also a strong supporter of carefully thought-through computer simulations. And I am also one of the strongest advocates here for real-world testing, and have pointed out many times how simple, quick, and cheap such testing can be. But most importantly, I also repeatedly stress that all of these calcs/sims/tests are utterly USELESS if they are not qualified with a careful estimate of their ACCURACY, or lack thereof!!!

    Next you say "... a "hand calculation" as a "validation" method is a necessary requirement ...". Clearly, you are not paying any attention to this thread! Now, either go back and read the linked document and use its DEFINITIONS of V&V, or else put forward your own CLEAR DEFINITION of what you mean by "validate".

    Without such clarity, the rest of your post is meaningless.
    ~~~o0o~~

    Final note to users of racecar simulators, such as ChassisSim.

    The "global-tyre-grip fudge-factor", or whatever it is called in your simulator, is hugely misleading. It can make the worst simulator in the world appear to be "...accurate to within 0.001 seconds per lap, or better!"
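A toy illustration of the point (the "simulator" below is a deliberately fake one-line model): a single global grip scalar can always be bisected until the sim "matches" any logged lap time, so the match by itself proves nothing about the model.

```python
def sim_lap_time(grip_scale):
    """Stand-in for any lap-time simulator: more grip, lower lap time.
    (A deliberately fake toy model -- time ~ 1/sqrt(grip).)"""
    return 60.0 / grip_scale ** 0.5

def tune_grip_to_match(logged_time_s, lo=0.5, hi=2.0, iters=60):
    """Bisect a single 'global grip' scalar until the sim 'matches'
    the logged lap time.  This always succeeds, whatever the model
    underneath -- which is exactly why the match proves nothing."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sim_lap_time(mid) > logged_time_s:
            lo = mid          # sim too slow: give it more grip
        else:
            hi = mid
    return 0.5 * (lo + hi)

grip = tune_grip_to_match(55.0)
print(round(sim_lap_time(grip), 3))   # "matches" the logged 55 s lap
```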

    Good engineers understand this, and are not fooled.

    Z
    Last edited by Z; 02-01-2017 at 06:11 PM. Reason: Details.

  9. #269
    Hi Guys,

    There have been some good points made in this thread, and there also seems to be a lot of misunderstanding and argument over the definition of the words validation and verification. Personally, I don’t care what you call it, just what you have done and what you can prove. The article posted by Rory, and the validation that Z seems to be talking about, is validation of the model, not the aero loads. This means specific tests performed in a rigorous manner, with strict control over the systematic and unsystematic errors; it is crucial for models and simulators just like ChassisSim. Ideally, multiple tests should be repeated, and performed over a varied range of scenarios, to ensure the required confidence in the model, and I’m sure that Danny and his partners have done such tests to ensure that ChassisSim is sufficiently accurate for its uses. As the V&V Guide states, “the associated accuracy requirements for the computational model predictions are based on the intended use of the model”.

    Danny’s use of the hand calcs to verify [or insert preferred term] his estimation of the aerodynamic effects of the new flap, using shock pots, provided evidence supporting his estimation of the aero load. I, like Mike, Danny and Z, am a huge fan of hand calcs: they offer a quick and easily sanity-checked method of supporting arguments, are a crucial tool in an engineer’s arsenal, and are often underutilised. Sure, other methods could be used to reduce the error and get a more accurate estimation of the aero load: wind tunnel testing, testing with softer springs, higher-speed straight-line runs, using a flatter surface, etc. But the critical point is that Danny’s estimation of the aero loads, and his hand calcs using track data, were accurate enough for their use. As Danny said in his video, if it were a more tightly controlled formula, a higher level of correlation would be required.

    The idea of using models is that you have a tool that allows you to probe the unknown, using it to simulate tests that you physically can’t, or don’t have the resources to, perform. For example, the reason you use chassis FEA is so that you can test many different chassis designs quickly without prototyping. An FSAE team will typically only be able to validate one chassis, the one they build, and cannot validate each chassis that they FEA. Likewise, using a lap time simulator like ChassisSim, you cannot, and don’t need to, fully validate the model every time you use it; the purpose of a model is to make predictions outside of your experimental data.

    That being said, for an FSAE team to chuck the parameters of their car into a simulator like ChassisSim and assume that it will be good enough for what they need would be poor engineering. I would consider this a new and unproven model, which needs to be tested before it is used to make good engineering decisions. It definitely should be tested, at the very least, with hand calcs and comparison to on-track data. Again, the critical point is that it is validated enough for its use. V&V guide: “if the model adequately predicts some related but typically simpler instances of its intended use…then the model would be validated to make predictions beyond the experimental data for the intended use”.

    Our team uses ChassisSim too, and we have found it to be very useful and Danny very supportive.

    Cam Warne
    Monash Motorsport team member 2012-2016
    Chief Engineer 2015

  10. #270
    Be aware that a hand-calc is a model as well, and as such, should undergo its own assessment. You are attempting to model some aspect of reality using an algebraic representation. Just because it is not done using numerical integration or hundreds of CPU-hours does not mean it cannot undergo (or is immune to) proper verification and validation.

    You can still conduct all the processes in V&V 10 for a hand calculation. E.g. Code Verification: did I actually do the math correctly in my head or on my calculator? Calculation Verification: is my calculation susceptible to wild, non-physical fluctuations if I vary an input parameter within a reasonable range? Validation: does the calculated result align with the physical result?
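A minimal sketch of those three checks applied to a trivial hand calc (aero load inferred from extra spring compression; all numbers hypothetical):

```python
def aero_load(compression_m, spring_rate_n_m, n_springs=2):
    """Hand calc: extra load implied by extra spring compression."""
    return compression_m * spring_rate_n_m * n_springs

# 1. Code verification: does the arithmetic match a known closed form?
assert abs(aero_load(0.004, 30000.0) - 240.0) < 1e-9

# 2. Calculation verification: no wild, non-physical swings when an
#    input is varied over a reasonable range (1-5 mm of compression)?
loads = [aero_load(c / 1000.0, 30000.0) for c in range(1, 6)]
assert all(0.0 < x < 1000.0 for x in loads)
assert loads == sorted(loads)   # monotone in compression, as physics demands

# 3. Validation: does the result align with a (hypothetical) measured
#    value, within that measurement's own uncertainty band?
measured_n, band_n = 250.0, 30.0
assert abs(aero_load(0.004, 30000.0) - measured_n) <= band_n
print("all three checks pass")
```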

    Fundamentally, there is very little difference between the applicability of hand-calcs or numerical calcs. All that changes is the order and rapidity of the calculation process.

    Note that the majority of the committee for V&V10 (including the chair, the incomparable Dr Len Schwer) have backgrounds in crashworthiness engineering and modelling. These are domains where lives are at stake if a finite element mesh is not properly resolved, or if the strain-rate sensitivity parameters for your steel material model have not been validated in its plastic region. In these domains, V&V matters, and it matters beyond reputation or revenue. Engineers in crashworthiness or certain areas of the defence industry have to understand what their models are representing and their similitude with reality. Equally, they need to be acutely aware of the limitations of their models, and that no engineering prediction is a panacea.
    Last edited by rory.gover; 02-01-2017 at 02:33 AM.
