AutoDesSys

The world got weird on me while I wasn't looking.



Ok.  Just started playing with chatGPT.  I asked it to generate a python script for formZ.  It was able to create something after I directed it to the python documentation page.

Note: it really didn't work, but I am pretty sure that this is because the documentation is not remotely complete at this time. So chatGPT made some reasonable assumptions, such as there being a function or method "modeler.Document()". There is none.

Regardless, this looks like a way to rough out ideas and then clean them up.

 

 

chatGPT prompt: please provide me with a python script for formZ that generates 10 cubes each increasing in size by 10 percent starting with a volume of 1 cubic foot. Here is the formZ/python API documentation.

import math
from formZ import modeler

# Create a new document
doc = modeler.Document()

# Set the units to feet
doc.units = modeler.Document.UNITS_FEET

# Create a group for the cubes
group = modeler.Group("Cubes")

# Define the starting cube size and volume
cube_size = 1.0
cube_vol = cube_size ** 3

# Create 10 cubes
for i in range(10):
    # Create a new cube
    cube = modeler.Cube(modeler.Vector3(0, 0, 0), cube_size)

    # Add the cube to the group
    group.add_object(cube)

    # Increase the size of the next cube by 10 percent
    cube_size *= 1.1

    # Calculate the volume of the cube
    cube_vol = cube_size ** 3

# Add the group to the document
doc.add_object(group)

# Save the document
doc.save("cubes.fmz")

chatGPT description of its generated code: 

This script uses the formZ Python API to create a new document, set the units to feet, create a group for the cubes, and then create 10 cubes, each increasing in size by 10 percent and starting with a volume of 1 cubic foot. The script then adds the group to the document and saves it as "cubes.fmz".
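Setting the formZ calls aside for a moment, the arithmetic in that loop can be checked with plain Python and no formZ import at all. This is just a sanity check of the size/volume progression the prompt asks for, not formZ code:

# Plain-Python check of the size/volume progression (no formZ needed).
cube_size = 1.0  # a 1 ft edge gives the 1 cubic foot starting volume
for i in range(10):
    print(f"cube {i + 1}: edge = {cube_size:.3f} ft, volume = {cube_size ** 3:.3f} cu ft")
    cube_size *= 1.1  # grow the edge by 10 percent for the next cube

Worth noting: a 10 percent larger edge means roughly 33 percent more volume (1.1^3 is about 1.331), so "increasing in size by 10 percent" is ambiguous in the prompt; chatGPT chose to scale the edge.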

 

 

Again, this code will NOT work. But later this week I am going to mess with it to see what it will take to make it work. I am going to try to find out what the minimal changes would be. Some things I see right now are:

  1. The import statement is not correct (or at least I don't think this is an option right now).
  2. As mentioned above, "doc = modeler.Document()" isn't a real call, or at least it is not currently implemented in the API. I think chatGPT is making an assumption based on how other Python APIs work, though it may see something in the documentation that I have not.
  3. As part of issue #2, since the "Document()" call doesn't exist, there are no further methods such as doc.save("xxx") either.
  4. If you look at my prompt, I did not ask it to save the document. I just want fZ to create the 10 cubes.

Anyway... going to play with it. Will give updates. Looks interesting. It might even be a worse coder than I am. Ha!

¢£

P.S.  chatGPT did not write any of the above text except for the code to build the 10 cubes.    All that crappy writing was mine and mine alone.

 

 

 

 

 

Edited by ¢hris £und
Had an error in my description of the code. I referred to an earlier generation of the script rather than the one I included in the post here.

AI Labor

We are on but the precipice of this strange new technology. 
Looking down, or is it looking up, on what tomorrow has in store for us. 
A keen eye may see a bright future unfolding, as problems are solved with language as labor. 
But there is darkness that surrounds the unknown.  Fear in this darkness as labors are lost.
Where shall we turn when all the language is labored and we are needed no more? 

-Justin Montoya

 

Hey Chris. I don't know why, but reading your post inspired me to write something creative for the first time in a long, long while. Thanks for that. I am optimistic about the future in general, but my mind can't help but worry about AI and its overreaching presence in the years to come. Hopefully my worries are unfounded and the world will still be a place without Terminators.


Not to plug another piece of software on this page, but I recently watched a video of an architect prompting ChatGPT to create a Python script to use inside of Grasshopper. The code was pasted into GH and one line of code didn't work. He copied and pasted this non-working line into ChatGPT, said it didn't work, and it fixed the issue. The new script was generated, copied, and pasted into GH... voila. Really neat stuff.


Yea, the potential is definitely there. It pretty much hosed its attempt here, but then it hasn't been trained on the fZ/Python API. Interesting that it "hallucinated" an answer anyway. I continue to explore the possibilities. I just registered for a developer account. Not a lot of information is given until you are accepted, but there is a possibility that chatGPT could be integrated into formZ with Python, as that is the language they are currently using too. We just need a more complete Python API to work off of.

It would be phenomenal just to ask formZ to build a script in app.  Even if it just gets the gist of what we want, it is a starting point.

One thing about the developer account: it mentions that it can be shared with 15 other users. I don't know if this is just a limitation of it being in beta or a hard limit.

Snow, as mentioned above, it wasn't quite able to do this with the Python API for formZ... even though it did try. I still haven't gotten around to seeing what it would take to fix it manually. I really had to be persistent to get chatGPT to accept using the call for the formZ cube; it kept attempting to use a call from another, similar API.

 

To get even more nerdy: a couple of years ago I was trying to get a grasp of how a basic (read: real basic) neural network would work. I chose at the time not to use any existing library or any reference materials, just to see if I could brute-force my way into grasping some of the concepts with the bits and pieces I had already gathered (which wasn't much). I wrote a bit of code and let it languish until the other night. It doesn't actually do anything but populate the NN with random data and then alter the cell data based on neighboring cell data. I asked chatGPT to look at the code and tell me what it was. It recognized that it was a 2D neural network, which honestly surprised me a bit, as I really had no idea if I had succeeded.

That led to a larger discussion of how NNs work. One thing I got wrong is that I had confused biases with weights, and I don't think I included real biases at all. Regardless, it kind of makes me want to explore AI more at a nerd level. I doubt my little bit of hack-level coding would ever go anywhere, but I sure do find it piques my interest.
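For anyone else fuzzy on that distinction, here is a minimal single-neuron sketch in plain Python (the textbook formulation, not my old grid code): the weights scale each input, and the bias shifts the weighted sum before the activation is applied.

import math
import random

# One neuron: output = activation(sum(weight_i * input_i) + bias)
inputs  = [0.5, -1.2, 3.0]
weights = [random.uniform(-1, 1) for _ in inputs]  # one weight per input
bias    = random.uniform(-1, 1)                    # shifts the weighted sum

weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
output = 1.0 / (1.0 + math.exp(-weighted_sum))     # sigmoid activation
print(output)

Training nudges both the weights and the bias to reduce error; without a bias, the weighted sum is forced to zero whenever all inputs are zero, which limits what the neuron can fit.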

Generative formZ modeling? I don't know. One thing of note: chatGPT would act as the interpreter only; other AI models would have to be incorporated to do something like this.

It is free if you just want to goof around with it, like I have been doing... I suggest anyone give it a try. I just signed in with my Google account.

There is a paid level of course... but not required.

https://auth0.openai.com/u/login/identifier?state=hKFo2SBOa0tuekR0YW9aN0ViVnIxd09jQ2NqZ1A1YXE3ZDRuUKFur3VuaXZlcnNhbC1sb2dpbqN0aWTZIGs5RUdvXzhFT0pncXJ6NC1SVExrVXJxcEkxOHY2Uktyo2NpZNkgVGRKSWNiZTE2V29USHROOTVueXl3aDVFNHlPbzZJdEc 

 

Exciting times!

 

¢£


This could be so nice to use in formZ. I will have to try to goof with it, but I feel a fear of the code version of this thing. I would like to think that what it does (meaning I don't know) is hallucinate a million results, but that it has learned to present only the results that are acceptable to us. To us this looks like magic. I have been using Midjourney to hash out image ideas and have actually used some of its basic image premises to do work, or to make texture maps.

With coding, I can't tell what I am looking at because I don't know Python, so I can only use this if the results do something in the formZ window. If there were a little map you could point out that would get a normal person with no experience to the point of making a cube and then adjusting its size by prompting, that would be amazing.

Then one might eventually, in a few moments, create a lighting tool that lets you click a point on your model and creates a new area light that shines on that spot from, say, 10 feet away.


John,

Currently, there isn't a specific map or guide like the one you describe. I am considering upgrading my account to explore things in more depth. The free account provides access to chatGPT 3.x, while the paid account offers access to 4.x, which is significantly more advanced than 3.x.

For instance, chatGPT 4.x achieved a score of over 90% on the Bar exam, surpassing the requirements for every state in the U.S. (although it's quite evident I am not a lawyer). In contrast, chatGPT 3.x scored in the 40% range or lower.

I experimented with MidJourney back in August, and it was intriguing. Now, I'm witnessing truly impressive outputs. Previously, it struggled with rendering accurate eye representations, but now it seems to have no issues. However, since I'm not inputting the prompts, I'm unsure about the level of effort being put in—whether it's all AI-generated or a combination.

Regarding formZ, I envision two possible scenarios:

  1. One-off inquiries where the user wouldn't see the code. formZ/LLM* would receive input from a prompt, parse it, generate code or a macro, and execute it in formZ. This may or may not produce a set of executable instructions for formZ. If direct parsing of formZ commands isn't feasible (which is likely the case), the generated code would be applied behind the scenes. (A rough sketch of this flow appears after the list.)

  2. Creation of reusable scripts or plugins. This approach would focus on developing tools with repetitive uses in a general sense. These tools would integrate into the interface, similar to the built-in tools. Option 1 should also be able to leverage this approach. A complete set of code would be generated and exposed to the creator or prompter (?) for further modification if necessary. Additionally, other segments could be added through further prompts to the LLM.

To maximize effectiveness, I believe both modes should coexist.
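Purely as a sketch of scenario 1, and with every name here being a hypothetical placeholder (the real formZ Python calls are still undocumented, as discussed above), the flow could look something like this:

# Sketch of scenario 1: prompt -> LLM -> generated code -> execute inside formZ.
# call_llm() and run_in_formz() are hypothetical placeholders, not real APIs.

def call_llm(prompt: str) -> str:
    """Send the prompt plus the formZ/Python API docs to an LLM and return code."""
    raise NotImplementedError("wire this up to whichever LLM service you use")

def run_in_formz(code: str) -> None:
    """Execute the generated code in formZ's embedded Python interpreter."""
    exec(code, {})  # in practice this would need sandboxing and error reporting

user_request = "make 10 cubes, each 10 percent larger than the last"
generated_code = call_llm(user_request)
run_in_formz(generated_code)

In scenario 1 the user never sees generated_code; in scenario 2 the same code string would instead be saved as a script or plugin and exposed for further editing.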

One seemingly mundane aspect that genuinely excites me is the LLM's ability to document the code. Currently, a significant portion of the Python API remains undocumented. Although the API might exist, it becomes irrelevant if users cannot understand how to utilize it. I have personally experimented with some of these undocumented calls through trial and error (and more errors) until finally achieving the desired outcome. However, this trial-and-error process is not viable for most people and situations. The LLM, on the other hand, can examine the code comprising the API and document it for ADS (although users lack the capability to do so, as the LLM would require access to the private source code, not just the public API). This feature would significantly expedite the process, which is currently lengthy and arduous.
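As a small example of how the raw material for that documentation could be gathered even now, Python's standard inspect module can at least list what a module exposes, and that dump could be handed to an LLM as a starting point. The import below just mirrors the module name from the snippet above and should be treated as a placeholder, since the actual formZ module layout isn't documented:

# Dump whatever a (largely undocumented) module exposes, as fodder for an LLM.
import inspect

from formZ import modeler  # placeholder import, mirroring the snippet above

for name, member in inspect.getmembers(modeler):
    if name.startswith("_"):
        continue  # skip private and dunder names
    signature = ""
    if callable(member):
        try:
            signature = str(inspect.signature(member))
        except (TypeError, ValueError):
            signature = "(signature unavailable)"
    print(f"{name}{signature}: {inspect.getdoc(member) or 'no docstring'}")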

Cheers!

¢£

*LLM = Large Language Model such as chatGPT

 

P.S.  I used chatGPT to clean up my text in this post.  I realize that when I write, it comes out as a stream of consciousness and doesn't really make a lot of sense.



I was watching a generative AI discussion last week, and it was said that Midjourney will be offering 3D very soon, perhaps even with version 6. I don't know what that means; I assume 3D scenes will be set up by prompt. It could also mean it generates a single object from a set of prompts. Either way, I think we will soon see a 3D implementation. People are hoping it will make 3D meshes of their already-generated 2D Midjourney results.


It will be interesting to see how the data is formatted. Will it be like an STL file, or will it be more like smooth/control-point data?

One thing I don't like about this kind of stuff: while the images are stunning, if you have a specific idea, you never really get it. You get something that is passable, or worse yet, you accept what you get and get distracted from the original idea.

¢£



Apologies for the late response.

I do think it is coming for 3D. Whether or not ADS gets behind it, I of course don't know.  I guess I hope they do.

I presume that initially the geometry might suck and be used more for visualization than for build-quality data. But I have been surprised before. Then again, 3D data may not even be required for visualizations, since the 2D vis can look just like a rendering.

I can see a time when you tell a model that you want a machine to do "X" and it will produce a data set to fit. The same goes for electronics CAD.

 

I am bad at predicting, but I would say 1 to 5 years for 3D engines to be integrated with LLMs. The question for me is: will that spur human creativity, or will it be a crutch?

